diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md
deleted file mode 100644
index 575ca6eafac58cdc19957be75379e78499abb160..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md
+++ /dev/null
@@ -1,16 +0,0 @@
-

Deewaar in hindi torrent download: How to watch the classic Bollywood movie online

-

If you are a fan of Bollywood movies, you have probably heard of Deewaar, one of the most iconic films in Indian cinema history. Released in 1975, Deewaar is a crime drama that explores the themes of brotherhood, loyalty, corruption, and social injustice. It stars Amitabh Bachchan and Shashi Kapoor as two brothers who take different paths in life, one becoming a gangster and the other a police officer. The movie was a huge commercial and critical success, earning several awards and accolades. It also influenced many filmmakers and actors in India and abroad, such as Quentin Tarantino, Danny Boyle, Rajkumar Hirani, and Shah Rukh Khan.

-

Deewaar in hindi torrent download


DOWNLOAD: https://byltly.com/2uKzaH



-

But how can you watch this masterpiece online if you don't have access to a DVD or a streaming service that offers it? One option that many people resort to is downloading a Deewaar in hindi torrent from various websites. However, this method is not only illegal but also risky, as it can expose you to malware, viruses, legal troubles, and poor-quality videos. In this article, we will tell you why you should avoid using torrent sites to watch Deewaar online, and what some safer and legal alternatives are. We will also give you some tips and tricks for finding Deewaar in hindi online easily and quickly.

-

What is Deewaar and why is it a must-watch movie?

-

Before we dive into the details of how to watch Deewaar online, let's first understand what makes this movie so special and why you should watch it if you haven't already. Here are some of the reasons why Deewaar is a must-watch movie for any Bollywood lover.

-

The plot and the themes of Deewaar

-

The story of Deewaar revolves around two brothers, Vijay (Amitabh Bachchan) and Ravi (Shashi Kapoor), who grow up in poverty after their father (Satyendra Kapoor) is framed for a crime he didn't commit by a corrupt businessman (Iftekhar). Vijay becomes bitter and disillusioned with society, and joins a gang led by Samant (Madan Puri), while Ravi becomes an honest and upright police officer. The brothers clash with each other over their conflicting ideologies and loyalties, leading to a dramatic confrontation that tests their bond.

-

The movie explores various themes such as family, friendship, morality, justice, violence, class struggle, and urban decay. It also reflects the socio-political context of India in the 1970s, when the country was facing economic crisis, political unrest, labor strikes, and corruption scandals. The movie portrays the plight of the common man who is oppressed by the system and has to resort to crime or rebellion to survive. It also questions the role of law enforcement and its effectiveness in dealing with crime and corruption.

-

The cast and the crew of Deewaar

-

Deewaar boasts an impressive cast and crew who delivered stellar performances and technical excellence. Amitabh Bachchan and Shashi Kapoor are brilliant as the two brothers who share a deep love but also a bitter rivalry. They showcase their acting range by portraying complex emotions such as anger, pain, guilt, pride, and remorse. Their chemistry is palpable and their dialogues are memorable. The movie also features other talented actors such as Nirupa Roy as Sumitra Devi, the mother of Vijay and Ravi; Parveen Babi as Anita, Vijay's love interest; Neetu Singh as Veera, Ravi's love interest; Iftekhar as Deshmukh; Madan Puri as Samant; Sudhir as Jaichand; Jagdish Raj as Jaggi; Alankar Joshi as young Vijay; Raju Shrestha as young Ravi; Manmohan Krishna as DCP Narang; Yunus Parvez as Rahim Chacha; Raj Kishore as Darpan; Shetty as Shetty; Mac Mohan as Mac; Viju Khote as Viju; Mohan Sherry as Peter; Satyendra Kapoor as Anand Verma; Kamal Kapoor as Mr Agarwal; Ramesh Deo as Sub-Inspector Shinde; and Murad as the Police Commissioner.

-

The movie was directed by Yash Chopra, one of the most celebrated filmmakers in Indian cinema history. He was known for his versatility and his ability to create engaging stories across genres such as romance, drama, thriller, action, comedy, and musicals. He was also known for his collaboration with Amitabh Bachchan in several hit movies such as Deewaar (1975), Kabhi Kabhie (1976), Trishul (1978), Kaala Patthar (1979), and Silsila (1981). He won six National Film Awards and 11 Filmfare Awards for his work.

-

The movie was written by Salim-Javed, the screenwriting duo of Salim Khan and Javed Akhtar, whose dialogues for the film have become legendary.

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md
deleted file mode 100644
index e3c4ac0bdbdd5da477d37f95298750241c266a07..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md
+++ /dev/null
@@ -1,24 +0,0 @@
-

How to Download PowerPoint Presentations for EADO 2022

-

If you are planning to attend the 19th EADO Congress in Stockholm, Sweden, on May 10-13, 2022, you might be interested in downloading some PowerPoint presentations to prepare for the event. The EADO Congress is a major international meeting that brings together experts and researchers in the field of dermato-oncology, the study and treatment of skin cancers. The congress will feature keynote lectures, symposia, workshops, oral and poster presentations, and networking opportunities.

-

download powerpoint crackeado 2022


Download Zip 🌟 https://byltly.com/2uKwTT



-

There are two ways to download PowerPoint presentations for EADO 2022:

-
    -
  1. From the official website of the congress: https://eado2022.com/. Here you can find the scientific program, the abstract submission guidelines, the registration information, and the sponsors and exhibitors. You can also access some of the previous congresses' presentations by clicking on the "Past Congresses" tab and selecting the year of your interest.
  2. From Microsoft PowerPoint: If you have a Microsoft 365 subscription, you can use PowerPoint to create your own presentations or download templates from the online library. You can also use PowerPoint on the web for free by signing in with a Microsoft account. To download PowerPoint or access it online, visit https://www.microsoft.com/en-ww/microsoft-365/powerpoint. You can search for "EADO" or "dermato-oncology" in the template gallery to find relevant designs.
-

We hope this article helps you download PowerPoint presentations for EADO 2022. We look forward to seeing you at the congress!


-

Why attend EADO 2022?

-

-

EADO 2022 is a great opportunity to learn from the leading experts in dermato-oncology, share your research and clinical experience, and network with colleagues from around the world. You will be able to update your knowledge on the latest advances and challenges in the diagnosis, prevention, and treatment of skin cancers, including melanoma, non-melanoma skin cancer, cutaneous lymphoma, and rare tumors. You will also be able to participate in interactive sessions, workshops, and debates on topics such as immunotherapy, targeted therapy, surgery, radiotherapy, dermatopathology, dermoscopy, and more.

-

How to prepare for EADO 2022?

-

To make the most of your attendance at EADO 2022, we recommend that you:

- -

We hope you enjoy EADO 2022 and have a productive and rewarding experience!

ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest].md
deleted file mode 100644
index b910d1dd25f5ab0861328a582e0807ca9415f1fb..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest].md
+++ /dev/null
@@ -1,93 +0,0 @@
-

GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest]: A Powerful Tool to Protect Your PC from Malware

-

Malware is a serious threat to your computer and your privacy. It can infect your system in various ways, such as through email attachments, downloads, pop-ups, and fake updates. Malware can damage your files, slow down your PC, steal your personal information, monitor your online activities, and even lock your system until you pay a ransom.

-

That's why you need a reliable anti-malware solution that can detect and remove malware from your PC effectively and efficiently. One such solution is GridinSoft Anti-Malware, an impressive application that has been developed specifically for the automatic removal of viruses, bots, spyware, keyloggers, trojans, scareware, rootkits, and other malicious software.

-

GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest]


Download Zip >>> https://byltly.com/2uKzdu



-

In this article, we will show you how to download, install, activate, and use GridinSoft Anti-Malware with crack license keys 2020 [latest] to protect your PC from malware. We will also answer some frequently asked questions about GridinSoft Anti-Malware.

-

What is GridinSoft Anti-Malware?

-

GridinSoft Anti-Malware is an excellent anti-malware solution that has been designed to provide a high-speed system scanning process without slowing down your PC. It has a simple, user-friendly interface that makes it easy to use for both beginners and experts.

-

Features and benefits of GridinSoft Anti-Malware

-

Some of the features and benefits of GridinSoft Anti-Malware are:

- -

Conclusion

-

Smash Hit is a game that will test your skills and reflexes as you smash glass objects with metal balls in a surreal dimension. The game has stunning graphics, realistic physics, and immersive music that will keep you hooked for hours. You can download Smash Hit from apkhere, a website that offers free Android apps and games that are not available on Google Play Store or have been modified to remove ads, unlock features, or add cheats. By downloading Smash Hit from apkhere, you can get the premium mode for free and enjoy all the benefits without spending a dime.

-

smash hit apkhere download
-smash hit apkhere mod
-smash hit apkhere premium
-smash hit apkhere free
-smash hit apkhere latest version
-smash hit apkhere hack
-smash hit apkhere unlimited balls
-smash hit apkhere full version
-smash hit apkhere android
-smash hit apkhere online
-smash hit apkhere game
-smash hit apkhere app
-smash hit apkhere review
-smash hit apkhere update
-smash hit apkhere install
-smash hit apkhere play store
-smash hit apkhere cheats
-smash hit apkhere tips
-smash hit apkhere guide
-smash hit apkhere walkthrough
-smash hit apkhere gameplay
-smash hit apkhere trailer
-smash hit apkhere video
-smash hit apkhere music
-smash hit apkhere soundtracks
-smash hit apkhere levels
-smash hit apkhere rooms
-smash hit apkhere dimensions
-smash hit apkhere graphics
-smash hit apkhere physics
-smash hit apkhere controls
-smash hit apkhere settings
-smash hit apkhere achievements
-smash hit apkhere leaderboards
-smash hit apkhere multiplayer
-smash hit apkhere co-op
-smash hit apkhere vr mode
-smash hit apkhere 3d mode
-smash hit apkhere 4k mode
-smash hit apkhere pro mode
-smash hit apkhere zen mode
-smash hit apkhere classic mode
-smash hit apkhere endless mode
-smash hit apk here mayhem mode

-

If you are looking for a fun and addictive game that will challenge your focus, concentration, and timing, you should download Smash Hit from apkhere today and start smashing glass with metal balls.

-

FAQs

-

Here are some frequently asked questions about Smash Hit:

-
    -
  1. What are the minimum requirements to play Smash Hit?
  2. -

    To play Smash Hit, you need an Android device that runs on Android 4.1 or higher, has at least 1 GB of RAM, and has at least 100 MB of free storage space.

    -
  3. How much does Smash Hit cost?
  4. -

    Smash Hit is free to download and play from Google Play Store or apkhere. However, if you want to unlock the premium mode, you need to pay $1.99 on Google Play Store or download it for free from apkhere.

    -
  5. Is Smash Hit safe to download from apkhere?
  6. -

    Apkhere is a website that offers free Android apps and games that are not available on Google Play Store or have been modified to remove ads, unlock features, or add cheats. Apkhere claims that all the apps and games on its website are safe and virus-free, but there is no guarantee that they are. Therefore, you should download Smash Hit from apkhere at your own risk and discretion.

    -
  7. How can I unlock premium mode in Smash Hit?
  8. -

    To unlock premium mode in Smash Hit, you need to pay $1.99 on Google Play Store or download it for free from apkhere. Premium mode gives you access to more levels, game modes, cloud save, statistics, and more.

    -
  9. What are some alternatives to Smash Hit?
  10. -

    If you like Smash Hit, you may also like these games:

    - -

401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Where to Download Green Days Time of Your Life and Other Popular Songs from the 90s.md b/spaces/congsaPfin/Manga-OCR/logs/Where to Download Green Days Time of Your Life and Other Popular Songs from the 90s.md
deleted file mode 100644
index d72963ff19acf9962b128ae578c69c206dce43f7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Where to Download Green Days Time of Your Life and Other Popular Songs from the 90s.md
+++ /dev/null
@@ -1,128 +0,0 @@
-

How to Download Green Day's Time of Your Life

-

Green Day's Time of Your Life is one of the most iconic songs of the 90s. It has been featured in many movies, TV shows, and events, such as Seinfeld, ER, and high school proms. It is a song that captures the nostalgia, regret, and hope of life's transitions. If you are a fan of this song, you might want to download it to your device and listen to it anytime you want. In this article, we will show you how to download Time of Your Life from different sources, such as YouTube, Spotify, and Apple Music.

-

Introduction

-

What is Time of Your Life and why is it popular?

-

Time of Your Life, also known as Good Riddance (Time of Your Life), is a song by American rock band Green Day. It was released in December 1997 as the second single from their fifth studio album, Nimrod. The song was written by lead singer Billie Joe Armstrong after his girlfriend moved to Ecuador. He named it Good Riddance to express his anger, but added Time of Your Life as a subtitle to show his acceptance.

-

download green day time of your life


Download Zip ✸✸✸ https://urlca.com/2uO809



-

The song is different from Green Day's usual punk rock style, as it features acoustic guitar and strings. It is a ballad that reflects on the memories and lessons learned in life. The lyrics are ambiguous and can be interpreted in various ways, such as a breakup, a graduation, or a farewell. The song has a catchy melody and a simple chord progression that makes it easy to sing along. The song has been praised by critics and fans alike for its emotional impact and universal appeal.

-

What are the benefits of downloading the song?

-

Downloading Time of Your Life can have many benefits for you. Here are some of them:

- -

How to download Time of Your Life from different sources

-

How to download Time of Your Life from YouTube

-

YouTube is one of the most popular platforms to watch and listen to music videos online. You can find the official music video of Time of Your Life by Green Day on YouTube, as well as many cover versions and live performances. However, YouTube does not allow you to download the videos directly to your device. You need to use a third-party website or software that can convert YouTube videos to MP3 files. Here are the steps to download Time of Your Life from YouTube:

-

Step 1: Find the official music video or a cover version on YouTube

-

Go to YouTube and search for Time of Your Life by Green Day. You can choose the official music video, which has over 300 million views, or any other version that you like. You can also filter the results by duration, quality, upload date, and more. Once you find the video that you want to download, click on it and watch it if you want.

-

Step 2: Copy the URL of the video

-

After you open the video, look at the address bar of your browser. You will see a long URL that starts with https://www.youtube.com/watch?v=. This is the link to the video that you need to copy. You can either right-click on the URL and select Copy, or highlight the URL and press Ctrl+C on your keyboard.

-

Step 3: Paste the URL into a YouTube to MP3 converter website

-

Now that you have copied the URL of the video, you need to paste it into a website that can convert YouTube videos to MP3 files. There are many websites that offer this service for free, such as ytmp3.cc, y2mate.com, flvto.biz, and more. You can choose any website that you trust and that works for you. Once you open the website, you will see a box where you can paste the URL of the video. You can either right-click on the box and select Paste, or click on the box and press Ctrl+V on your keyboard.

-

Step 4: Choose the format and quality of the audio file

-

After you paste the URL of the video, the website will analyze it and show you some options for downloading it. You can choose the format and quality of the audio file that you want to download. The most common format is MP3, which is compatible with most devices and players. The quality of the audio file depends on the bitrate, which is measured in kilobits per second (kbps). The higher the bitrate, the better the sound quality, but also the larger the file size. You can choose between different bitrates, such as 64 kbps, 128 kbps, 192 kbps, 256 kbps, or 320 kbps. The default option is usually 128 kbps, which is good enough for most purposes.
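
As a rough illustration of how bitrate translates into file size, here is a small Python sketch; the helper name is just an example we made up and is not part of YouTube or any converter website, and real MP3 files will be slightly larger because of headers and metadata.

```python
def estimate_audio_size_mb(bitrate_kbps: float, duration_seconds: float) -> float:
    """Estimate the size in megabytes of a constant-bitrate audio file."""
    kilobytes_per_second = bitrate_kbps / 8  # 8 kilobits = 1 kilobyte
    total_kilobytes = kilobytes_per_second * duration_seconds
    return total_kilobytes / 1024  # 1024 KB = 1 MB


# "Time of Your Life" runs for about 2 minutes and 34 seconds.
duration = 2 * 60 + 34
for bitrate in (64, 128, 192, 256, 320):
    print(f"{bitrate} kbps -> about {estimate_audio_size_mb(bitrate, duration):.1f} MB")
```

At 128 kbps this works out to roughly 1 MB per minute of audio, which is why the default option is usually a good balance between sound quality and storage space.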

-

download green day good riddance mp3
-download green day time of your life lyrics
-download green day time of your life video
-download green day time of your life acoustic
-download green day time of your life instrumental
-download green day time of your life live
-download green day time of your life guitar tab
-download green day time of your life karaoke
-download green day time of your life soundcloud
-download green day time of your life archive.org
-download green day time of your life piano sheet music
-download green day time of your life remix
-download green day time of your life cover
-download green day time of your life ukulele chords
-download green day time of your life ringtone
-download green day time of your life graduation song
-download green day time of your life seinfeld version
-download green day time of your life 320kbps
-download green day time of your life spotify
-download green day time of your life youtube
-download green day good riddance (time of your life) official music video
-download green day good riddance (time of your life) album version
-download green day good riddance (time of your life) radio edit
-download green day good riddance (time of your life) guitar lesson
-download green day good riddance (time of your life) violin sheet music
-download green day good riddance (time of your life) meaning
-download green day good riddance (time of your life) chords and lyrics
-download green day good riddance (time of your life) backing track
-download green day good riddance (time of your life) midi file
-download green day good riddance (time of your life) unplugged
-download green day good riddance (time of your life) reprise records release date 1997
-download green day good riddance (time of your life) alternative genre
-download green day good riddance (time of your life) written by billie joe armstrong
-download green day good riddance (time of your life) produced by rob cavallo
-download green day good riddance (time of your life) recorded at studio 880
-download green day good riddance (time of your life) nominated for grammy award for best rock song
-download green day good riddance (time of your life) featured in movies and tv shows
-download green day good riddance (time of your life) sampled by other artists
-download green day good riddance (time of your life) performed at woodstock 1994
-download green day good riddance (time of your life) inspired by bob dylan
-how to play green day time of your life on guitar
-how to sing green day time of your life
-how to tune guitar for green day time of your life
-how to strum green day time of your life
-how to read music for green day time of your life
-how to play green day time of your life on piano
-how to play green day time of your life on ukulele
-how to play green day time of your life on violin
-how to play green day time of your life on drums

-

Step 5: Download the file to your device

-

Once you choose the format and quality of the audio file, you can click on the Download button or icon on the website. The website will start converting the video to the audio file and then download it to your device. Depending on the website, you might see a progress bar or a countdown timer that shows you how long the conversion and download will take. You might also see a pop-up window or a new tab that opens when you click on the Download button. You can close them if they are not necessary. When the download is complete, you will see a notification or a message that tells you where the file is saved on your device. You can also choose to rename the file or change its location if you want.

-

How to download Time of Your Life from Spotify

-

Spotify is one of the most popular streaming services for music and podcasts. You can find Time of Your Life by Green Day on Spotify, as well as many other songs and albums by the band. However, Spotify does not allow you to download the songs directly to your device. You need to have a Spotify account and a premium subscription that costs $9.99 per month. With a premium subscription, you can download up to 10,000 songs on five different devices and listen to them offline. Here are the steps to download Time of Your Life from Spotify:

-

Step 1: Sign up for a Spotify account or log in to your existing one

-

If you don't have a Spotify account, you can sign up for one for free on their website or app. You will need to provide your email address, password, username, date of birth, and gender. You can also sign up with your Facebook or Apple account if you prefer. If you already have a Spotify account, you can log in with your email address, username, or Facebook or Apple account.

-

Step 2: Search for Time of Your Life by Green Day on Spotify

-

Once you are logged in to your Spotify account, you can search for Time of Your Life by Green Day on their website or app. You can type the name of the song or the band in the search bar and hit Enter. You will see a list of results that match your query. You can filter the results by songs, albums, artists, playlists, podcasts, and more. You can also use voice search if you are using the app on your phone or tablet. Once you find the song that you want to download, click on it and play it if you want.

-

Step 3: Add the song to your library or a playlist

-

After you open the song, you will see some options below it, such as Like, Add to Queue, Share, and More. To download the song, you need to add it to your library or a playlist first. You can do this by clicking on the Like button, which will save the song to your Liked Songs list in your library. You can also click on the More button and select Add to Playlist, which will let you choose an existing playlist or create a new one where you can add the song.

-

Step 4: Enable the offline mode on your device

-

To download the song to your device, you need to enable the offline mode on your device first. This will allow you to listen to the songs that you have downloaded without using internet connection or data. To enable the offline mode on your device, go to Settings and look for Playback or Offline Mode options. Toggle the switch or check the box to turn on the offline mode. You will see a green icon or a message that indicates that you are in offline mode.

-

Step 5: Download the song to your device

-

Once you have enabled the offline mode on your device, you can download the song to your device. To download the song, go to your library or playlist where you have added the song. You will see a download icon or button next to the song or the playlist. Click on the download icon or button to start downloading the song to your device. You will see a progress bar or a check mark that shows you how much of the song or the playlist has been downloaded. When the download is complete, you will see a green icon or a message that indicates that the song or the playlist is available offline.

-

How to download Time of Your Life from Apple Music

-

Apple Music is another popular streaming service for music and podcasts. You can find Time of Your Life by Green Day on Apple Music, as well as many other songs and albums by the band. However, Apple Music does not allow you to download the songs directly to your device. You need to have an Apple Music account and a subscription that costs $9.99 per month. With a subscription, you can download up to 100,000 songs on up to 10 devices and listen to them offline. Here are the steps to download Time of Your Life from Apple Music:

-

Step 1: Sign up for an Apple Music account or log in to your existing one

-

If you don't have an Apple Music account, you can sign up for one for free on their website or app. You will need to provide your Apple ID, password, and payment method. You can also sign up with your iTunes account if you have one. If you already have an Apple Music account, you can log in with your Apple ID, password, or iTunes account.

-

Step 2: Search for Time of Your Life by Green Day on Apple Music

-

Once you are logged in to your Apple Music account, you can search for Time of Your Life by Green Day on their website or app. You can type the name of the song or the band in the search bar and hit Enter. You will see a list of results that match your query. You can filter the results by songs, albums, artists, playlists, stations, and more. You can also use Siri or voice control if you are using the app on your iPhone, iPad, or iPod touch. Once you find the song that you want to download, click on it and play it if you want.

-

Step 3: Add the song to your library or a playlist

-

After you open the song, you will see some options below it, such as Add, Play Next, Share, and More. To download the song, you need to add it to your library or a playlist first. You can do this by clicking on the Add button, which will save the song to your library. You can also click on the More button and select Add to a Playlist, which will let you choose an existing playlist or create a new one where you can add the song.

-

Step 4: Turn on the sync library option on your device

-

To download the song to your device, you need to turn on the sync library option on your device first. This will allow you to sync your music library across all your devices and access them offline. To turn on the sync library option on your device, go to Settings and look for Music or Sync Library options. Toggle the switch or check the box to turn on the sync library option. You will see a green icon or a message that indicates that you have turned on the sync library option.

-

Step 5: Download the song to your device

-

Once you have turned on the sync library option on your device, you can download the song to your device. To download the song, go to your library or playlist where you have added the song. You will see a cloud icon or button next to the song or the playlist. Click on the cloud icon or button to start downloading the song to your device. You will see a progress bar or a check mark that shows you how much of the song or the playlist has been downloaded. When the download is complete, you will see a blue icon or a message that indicates that the song or the playlist is available offline.

-

Conclusion

-

Summary of the main points

-

In this article, we have shown you how to download Time of Your Life by Green Day from different sources, such as YouTube, Spotify, and Apple Music. We have explained what Time of Your Life is and why it is popular, and what the benefits of downloading the song are. We have also provided you with step-by-step instructions for each source. We hope that this article has been helpful and informative for you.

-

Call to action and recommendations

-

If you are a fan of Time of Your Life by Green Day, we encourage you to download the song to your device and enjoy it anytime you want. You can also check out other songs and albums by Green Day on these sources, as well as other artists and genres that you might like. You can also share your thoughts and opinions about Time of Your Life by Green Day with us in the comments section below. We would love to hear from you!

-

FAQs

-

Q: Is it legal to download Time of Your Life by Green Day from these sources?

-

A: It depends on the source and the country where you live. Generally, it is legal to download Time of Your Life by Green Day from these sources if you have a valid account and subscription, and if you use the downloaded file for personal and non-commercial purposes only. However, some countries may have different laws and regulations regarding downloading music online, so you should check them before downloading anything.

-

Q: How long does it take to download Time of Your Life by Green Day from these sources?

-

A: It depends on the source, the quality, and the speed of your internet connection. Generally, it takes a few seconds to a few minutes to download Time of Your Life by Green Day from these sources. However, some factors may affect the download time, such as network congestion, server issues, or device performance.

-

Q: How much space does Time of Your Life by Green Day take on my device?

-

A: It depends on the format and quality of the audio file that you download. Generally, an MP3 file encoded at 128 kbps takes about 1 MB of space per minute of audio, since 128 kilobits per second is roughly 16 kilobytes per second. Therefore, Time of Your Life by Green Day, which is about 2 minutes and 34 seconds long, would take roughly 2.4 MB at 128 kbps, or about 6 MB at 320 kbps.

-

Q: How can I delete Time of Your Life by Green Day from my device?

-

A: It depends on the device and the source that you used to download it. Generally, you can delete Time of Your Life by Green Day from your device by going to your library or playlist where you have downloaded it, and clicking on the delete icon or button next to it. You can also delete it from your device's storage settings or file manager app.

-

Q: What are some other songs similar to Time of Your Life by Green Day?

-

A: If you like Time of Your Life by Green Day, you might also like some other songs that are similar in style, theme, or mood. Here are some examples:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Gramsevak Question Paper Maharashtra Pdf Free.md b/spaces/contluForse/HuggingGPT/Gramsevak Question Paper Maharashtra Pdf Free.md deleted file mode 100644 index 83ac466cf53bd98d7b0e2e94d3e1ca0d8ec2b846..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Gramsevak Question Paper Maharashtra Pdf Free.md +++ /dev/null @@ -1,68 +0,0 @@ -## Gramsevak Question Paper Maharashtra Pdf Free - - - - - - - - - -**CLICK HERE >> [https://urluso.com/2txV2G](https://urluso.com/2txV2G)** - - - - - - - - - - - - Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Gramsevak Question Paper Maharashtra Pdf Free": - -# Gramsevak Question Paper Maharashtra Pdf Free: How to Download and Prepare for Gramsevak Bharti 2023 - - - -If you are looking for **Gramsevak Question Paper Maharashtra Pdf Free**, then you have come to the right place. In this article, we will tell you how to download and prepare for Gramsevak Bharti 2023, which is a recruitment exam for the post of Gramsevak in Maharashtra Rural Development Department. - - - -Gramsevak is a village-level officer who is responsible for implementing various schemes and programs of the government in rural areas. Gramsevak Bharti 2023 is expected to be conducted in May 2023 by Zilla Parishad in Maharashtra for filling up around 13400 posts[^2^]. To qualify for this exam, you need to have a bachelor's degree in any discipline from a recognized university and be between 18 to 38 years of age. - - - -To crack Gramsevak Bharti 2023, you need to prepare well for the written test, which will consist of 200 multiple-choice questions on topics such as General Knowledge, Mathematics, English, Marathi, and Rural Development. The duration of the test will be two hours and the total marks will be 200. There will be no negative marking for wrong answers. - - - -One of the best ways to prepare for Gramsevak Bharti 2023 is to solve previous year question papers of Gramsevak exam. This will help you to understand the pattern, difficulty level, and syllabus of the exam. It will also help you to improve your speed, accuracy, and time management skills. - - - -Fortunately, you can download **Gramsevak Question Paper Maharashtra Pdf Free** from various online sources. One such source is [Mazasarav.com](https://mazasarav.com/gramsevak-bharti-question-papers/), which provides Gramsevak Bharti question papers PDF for free[^1^]. You can also find other useful resources such as syllabus, books, mock tests, and tips on this website. - - - -So, what are you waiting for? Download **Gramsevak Question Paper Maharashtra Pdf Free** today and start your preparation for Gramsevak Bharti 2023. We wish you all the best! - -Here is a possible continuation of the article: - -If you want to know more about Gramsevak Bharti 2023, you can visit the official website of Maharashtra Rural Development Department at [https://rdd.maharashtra.gov.in/](https://rdd.maharashtra.gov.in/). Here you can find the latest updates, notifications, and guidelines regarding Gramsevak Bharti 2023. You can also check the eligibility criteria, application process, selection process, and salary details of Gramsevak post. - - - -Gramsevak Bharti 2023 is a golden opportunity for those who want to serve the rural population of Maharashtra and contribute to their development. Gramsevak is a challenging and rewarding job that requires dedication, hard work, and passion. 
If you have these qualities, then you should not miss this chance to apply for Gramsevak Bharti 2023. - - - -To summarize, **Gramsevak Question Paper Maharashtra Pdf Free** is a valuable resource for preparing for Gramsevak Bharti 2023. You can download it from [Mazasarav.com](https://mazasarav.com/gramsevak-bharti-question-papers/) and practice it regularly. You can also refer to other online sources for more study material and guidance. Remember, practice makes perfect. So, practice as much as you can and ace Gramsevak Bharti 2023. - - dfd1c89656 - - - - - diff --git a/spaces/contluForse/HuggingGPT/assets/Crack Pigc Condominio Los Pinos 88.md b/spaces/contluForse/HuggingGPT/assets/Crack Pigc Condominio Los Pinos 88.md deleted file mode 100644 index dc1e641d8e0efcd9afcddf706632301a3c6eecbb..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Crack Pigc Condominio Los Pinos 88.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

Expertsoft ASN Explorer Download Full Free
Hotmail Sign In For Free With 30 Day Trial
Hotmail Sign In For Free With 30 Day Trial
free trial of gmail
virtual sokets kun registreren yt
rodeo music 8.4 for win 5 crack
Campo de Deportes de Mollet Palma
Rage Against the Machine 2009 full movie download
pidgin 2.7.6 login serial number
Naruto VI Gekitou 12 Episode 1 Eng Sub
free winrar 5.75
Free download of extra anxiety 2008
Shark Lagoon Amandas Therapy Walkthrought.PDF
Reverse.com Mobile Browser Password
Refree Softphone 3.5.6 Free Trial Version
motosuite pro 4.0.0.2 x86

-

crack pigc condominio los pinos 88


Download File 🗸 https://ssurll.com/2uzxfc



-

Free 3gp mp4 video converter portable
Creation(Youtube) 4.0 BETA Version with crack
Auto It 3 Crack
avast antispam 2009 serial keygen
Virtual sokets kun registreren yt
Spongebob Squarepants Zip Game Game V1.1 Full
Dv8 0.1.6.2. Incl.Keygen-klk
Tenchynet Downloader for IPhone
Etherape.Etherape.Etherape.ESP

-

Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen Crackl
Tcwin 45 for windows 7 32bit.rar
Tcwin 45 for windows 7 32bit.rar
Rapidshare free crack installer mac
TBC Waterproof Jd Driver
Android Pogo Game Download
Games Download IDM 5.0.45 Pro Serial Key
Age of Empires II: Legendary Edition Patch Download
WA TP
DIAL MINI 5 Crack
FiberMax Pro 2.00.82.Incl.Keygen-h8kek

-

The Mob 1.1.1.8 Crack [Crack Team]
fluffy lion app.rar
gtag365 v2.5.1 Torrent
John deere 4818 model xj driver
My Games on Crack PC 6.9.0.1339
salvador 0.6 crack
I was running this as a windows service and it could not find the folder, even though it was created before the service started up. Was unable to locate info how to solve it either. so I just stopped it and that was that. I have since reinstalled the service and it worked like a charm.dynapaper 0.4.6 Crack

899543212b
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/F-16 Multirole Fighter Crack Ful Experience the Realism of Air Combat.md b/spaces/contluForse/HuggingGPT/assets/F-16 Multirole Fighter Crack Ful Experience the Realism of Air Combat.md deleted file mode 100644 index 11bb89782454f930911d76e83009255465e2fd31..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/F-16 Multirole Fighter Crack Ful Experience the Realism of Air Combat.md +++ /dev/null @@ -1,6 +0,0 @@ -

F-16 Multirole Fighter Crack Ful


Download Filehttps://ssurll.com/2uzwCh



- - aaccfb2cb3
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/__init__.py deleted file mode 100644 index dca2f09405330743c476e190896bee39c45498ea..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .base import BaseSegmentor -from .cascade_encoder_decoder import CascadeEncoderDecoder -from .encoder_decoder import EncoderDecoder - -__all__ = ['BaseSegmentor', 'EncoderDecoder', 'CascadeEncoderDecoder'] diff --git a/spaces/cyllum/soccertwos-analytics/Dockerfile b/spaces/cyllum/soccertwos-analytics/Dockerfile deleted file mode 100644 index 3c4e51c9146756a8f151137ff15c8d0aa33e7823..0000000000000000000000000000000000000000 --- a/spaces/cyllum/soccertwos-analytics/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM python:3.11-slim - -WORKDIR /app - -COPY ./requirements.txt /app/requirements.txt - -RUN pip3 install --no-cache-dir -r /app/requirements.txt - -# User -RUN useradd -m -u 1000 user -USER user -ENV HOME /home/user -ENV PATH $HOME/.local/bin:$PATH - -WORKDIR $HOME -RUN mkdir app -WORKDIR $HOME/app -COPY . $HOME/app - -EXPOSE 8501 -CMD streamlit run app.py diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/utils/__init__.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/daddyjin/TalkingFaceGeneration/app.py b/spaces/daddyjin/TalkingFaceGeneration/app.py deleted file mode 100644 index bce4f978f321e8b8ae78a9693e38e441c2a7498c..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -from FONT.gradio_demo import FONT -from Demo_TFR_Pirenderer.gradio_demo import OPT - -def tfg_FONT(source_img, driving_audio): - tfg = FONT() - video_path = tfg.test(source_image_path=source_img, driving_audio_path=driving_audio) - return video_path - -tfg_FONT_demo = gr.Interface(fn=tfg_FONT, inputs=[gr.Image(type="filepath"), gr.Audio(type="filepath")], - outputs=gr.Video(include_audio=True, height=256, width=256), - examples=[["./example/images/60.png", "./example/audios/6343252661930009508_00092.wav"], - ["./example/images/7.png", "./example/audios/6350921755403330389_00056.wav"]]) - -def tfg_OPT(source_img, driving_audio): - tfg = OPT(checkpoint_path='./Demo_TFR_Pirenderer/checkpoints', config_path='./Demo_TFR_Pirenderer/src/config') - video_path = tfg.test(source_image=source_img, driven_audio=driving_audio) - return video_path - -tfr_OPT_demo = gr.Interface(fn=tfg_OPT, inputs=[gr.Image(type="filepath"), gr.Audio(type="filepath")], - outputs=gr.Video(include_audio=True, height=256, width=256), - examples=[["./example/images/60.png", "./example/audios/6343252661930009508_00092.wav"], - ["./example/images/7.png", "./example/audios/6350921755403330389_00056.wav"]]) - - -demo = gr.TabbedInterface([tfr_OPT_demo, tfg_FONT_demo], ["PIRenderer_based_Talking_Face_Generation", "FOMM_based_Talking_Face_Generation"]) - - -info = demo.launch() -print(info) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/_magics.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/_magics.py deleted file mode 100644 index 7fe6131182952ff30bf63543de528657f7ba77a2..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/_magics.py +++ /dev/null @@ -1,109 +0,0 @@ -""" -Magic functions for rendering vega-lite specifications -""" -__all__ = ["vegalite"] - -import json -import warnings - -import IPython -from IPython.core import magic_arguments -import pandas as pd -from toolz import curried - -from altair.vegalite import v5 as vegalite_v5 - -try: - import yaml - - YAML_AVAILABLE = True -except ImportError: - YAML_AVAILABLE = False - - -RENDERERS = { - "vega-lite": { - "5": vegalite_v5.VegaLite, - }, -} - - -TRANSFORMERS = { - "vega-lite": { - "5": vegalite_v5.data_transformers, - }, -} - - -def _prepare_data(data, data_transformers): - """Convert input data to data for use within schema""" - if data is None or isinstance(data, dict): - return data - elif isinstance(data, pd.DataFrame): - return curried.pipe(data, data_transformers.get()) - elif isinstance(data, str): - return {"url": data} - else: - warnings.warn("data of type {} not recognized".format(type(data)), stacklevel=1) - return data - - -def _get_variable(name): - """Get a variable from the notebook namespace.""" - ip = IPython.get_ipython() - if ip is None: - raise ValueError( - "Magic command must be run within an IPython " - "environemnt, in which get_ipython() is defined." - ) - if name not in ip.user_ns: - raise NameError( - "argument '{}' does not match the " - "name of any defined variable".format(name) - ) - return ip.user_ns[name] - - -@magic_arguments.magic_arguments() -@magic_arguments.argument( - "data", - nargs="?", - help="local variablename of a pandas DataFrame to be used as the dataset", -) -@magic_arguments.argument("-v", "--version", dest="version", default="v5") -@magic_arguments.argument("-j", "--json", dest="json", action="store_true") -def vegalite(line, cell): - """Cell magic for displaying vega-lite visualizations in CoLab. - - %%vegalite [dataframe] [--json] [--version='v5'] - - Visualize the contents of the cell using Vega-Lite, optionally - specifying a pandas DataFrame object to be used as the dataset. - - if --json is passed, then input is parsed as json rather than yaml. - """ - args = magic_arguments.parse_argstring(vegalite, line) - existing_versions = {"v5": "5"} - version = existing_versions[args.version] - assert version in RENDERERS["vega-lite"] - VegaLite = RENDERERS["vega-lite"][version] - data_transformers = TRANSFORMERS["vega-lite"][version] - - if args.json: - spec = json.loads(cell) - elif not YAML_AVAILABLE: - try: - spec = json.loads(cell) - except json.JSONDecodeError as err: - raise ValueError( - "%%vegalite: spec is not valid JSON. 
" - "Install pyyaml to parse spec as yaml" - ) from err - else: - spec = yaml.load(cell, Loader=yaml.SafeLoader) - - if args.data is not None: - data = _get_variable(args.data) - spec["data"] = _prepare_data(data, data_transformers) - - return VegaLite(spec) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/varLib/merger.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/varLib/merger.py deleted file mode 100644 index a4db492b75f3735115d3db950387a9f6e2c7d5d0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/varLib/merger.py +++ /dev/null @@ -1,1699 +0,0 @@ -""" -Merge OpenType Layout tables (GDEF / GPOS / GSUB). -""" -import os -import copy -import enum -from operator import ior -import logging -from fontTools.colorLib.builder import MAX_PAINT_COLR_LAYER_COUNT, LayerReuseCache -from fontTools.misc import classifyTools -from fontTools.misc.roundTools import otRound -from fontTools.misc.treeTools import build_n_ary_tree -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables import otBase as otBase -from fontTools.ttLib.tables.otConverters import BaseFixedValue -from fontTools.ttLib.tables.otTraverse import dfs_base_table -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.varLib import builder, models, varStore -from fontTools.varLib.models import nonNone, allNone, allEqual, allEqualTo, subList -from fontTools.varLib.varStore import VarStoreInstancer -from functools import reduce -from fontTools.otlLib.builder import buildSinglePos -from fontTools.otlLib.optimize.gpos import ( - _compression_level_from_env, - compact_pair_pos, -) - -log = logging.getLogger("fontTools.varLib.merger") - -from .errors import ( - ShouldBeConstant, - FoundANone, - MismatchedTypes, - NotANone, - LengthsDiffer, - KeysDiffer, - InconsistentGlyphOrder, - InconsistentExtensions, - InconsistentFormats, - UnsupportedFormat, - VarLibMergeError, -) - - -class Merger(object): - def __init__(self, font=None): - self.font = font - # mergeTables populates this from the parent's master ttfs - self.ttfs = None - - @classmethod - def merger(celf, clazzes, attrs=(None,)): - assert celf != Merger, "Subclass Merger instead." - if "mergers" not in celf.__dict__: - celf.mergers = {} - if type(clazzes) in (type, enum.EnumMeta): - clazzes = (clazzes,) - if type(attrs) == str: - attrs = (attrs,) - - def wrapper(method): - assert method.__name__ == "merge" - done = [] - for clazz in clazzes: - if clazz in done: - continue # Support multiple names of a clazz - done.append(clazz) - mergers = celf.mergers.setdefault(clazz, {}) - for attr in attrs: - assert attr not in mergers, ( - "Oops, class '%s' has merge function for '%s' defined already." 
- % (clazz.__name__, attr) - ) - mergers[attr] = method - return None - - return wrapper - - @classmethod - def mergersFor(celf, thing, _default={}): - typ = type(thing) - - for celf in celf.mro(): - mergers = getattr(celf, "mergers", None) - if mergers is None: - break - - m = celf.mergers.get(typ, None) - if m is not None: - return m - - return _default - - def mergeObjects(self, out, lst, exclude=()): - if hasattr(out, "ensureDecompiled"): - out.ensureDecompiled(recurse=False) - for item in lst: - if hasattr(item, "ensureDecompiled"): - item.ensureDecompiled(recurse=False) - keys = sorted(vars(out).keys()) - if not all(keys == sorted(vars(v).keys()) for v in lst): - raise KeysDiffer( - self, expected=keys, got=[sorted(vars(v).keys()) for v in lst] - ) - mergers = self.mergersFor(out) - defaultMerger = mergers.get("*", self.__class__.mergeThings) - try: - for key in keys: - if key in exclude: - continue - value = getattr(out, key) - values = [getattr(table, key) for table in lst] - mergerFunc = mergers.get(key, defaultMerger) - mergerFunc(self, value, values) - except VarLibMergeError as e: - e.stack.append("." + key) - raise - - def mergeLists(self, out, lst): - if not allEqualTo(out, lst, len): - raise LengthsDiffer(self, expected=len(out), got=[len(x) for x in lst]) - for i, (value, values) in enumerate(zip(out, zip(*lst))): - try: - self.mergeThings(value, values) - except VarLibMergeError as e: - e.stack.append("[%d]" % i) - raise - - def mergeThings(self, out, lst): - if not allEqualTo(out, lst, type): - raise MismatchedTypes( - self, expected=type(out).__name__, got=[type(x).__name__ for x in lst] - ) - mergerFunc = self.mergersFor(out).get(None, None) - if mergerFunc is not None: - mergerFunc(self, out, lst) - elif isinstance(out, enum.Enum): - # need to special-case Enums as have __dict__ but are not regular 'objects', - # otherwise mergeObjects/mergeThings get trapped in a RecursionError - if not allEqualTo(out, lst): - raise ShouldBeConstant(self, expected=out, got=lst) - elif hasattr(out, "__dict__"): - self.mergeObjects(out, lst) - elif isinstance(out, list): - self.mergeLists(out, lst) - else: - if not allEqualTo(out, lst): - raise ShouldBeConstant(self, expected=out, got=lst) - - def mergeTables(self, font, master_ttfs, tableTags): - for tag in tableTags: - if tag not in font: - continue - try: - self.ttfs = master_ttfs - self.mergeThings(font[tag], [m.get(tag) for m in master_ttfs]) - except VarLibMergeError as e: - e.stack.append(tag) - raise - - -# -# Aligning merger -# -class AligningMerger(Merger): - pass - - -@AligningMerger.merger(ot.GDEF, "GlyphClassDef") -def merge(merger, self, lst): - if self is None: - if not allNone(lst): - raise NotANone(merger, expected=None, got=lst) - return - - lst = [l.classDefs for l in lst] - self.classDefs = {} - # We only care about the .classDefs - self = self.classDefs - - allKeys = set() - allKeys.update(*[l.keys() for l in lst]) - for k in allKeys: - allValues = nonNone(l.get(k) for l in lst) - if not allEqual(allValues): - raise ShouldBeConstant( - merger, expected=allValues[0], got=lst, stack=["." 
+ k] - ) - if not allValues: - self[k] = None - else: - self[k] = allValues[0] - - -def _SinglePosUpgradeToFormat2(self): - if self.Format == 2: - return self - - ret = ot.SinglePos() - ret.Format = 2 - ret.Coverage = self.Coverage - ret.ValueFormat = self.ValueFormat - ret.Value = [self.Value for _ in ret.Coverage.glyphs] - ret.ValueCount = len(ret.Value) - - return ret - - -def _merge_GlyphOrders(font, lst, values_lst=None, default=None): - """Takes font and list of glyph lists (must be sorted by glyph id), and returns - two things: - - Combined glyph list, - - If values_lst is None, return input glyph lists, but padded with None when a glyph - was missing in a list. Otherwise, return values_lst list-of-list, padded with None - to match combined glyph lists. - """ - if values_lst is None: - dict_sets = [set(l) for l in lst] - else: - dict_sets = [{g: v for g, v in zip(l, vs)} for l, vs in zip(lst, values_lst)] - combined = set() - combined.update(*dict_sets) - - sortKey = font.getReverseGlyphMap().__getitem__ - order = sorted(combined, key=sortKey) - # Make sure all input glyphsets were in proper order - if not all(sorted(vs, key=sortKey) == vs for vs in lst): - raise InconsistentGlyphOrder() - del combined - - paddedValues = None - if values_lst is None: - padded = [ - [glyph if glyph in dict_set else default for glyph in order] - for dict_set in dict_sets - ] - else: - assert len(lst) == len(values_lst) - padded = [ - [dict_set[glyph] if glyph in dict_set else default for glyph in order] - for dict_set in dict_sets - ] - return order, padded - - -@AligningMerger.merger(otBase.ValueRecord) -def merge(merger, self, lst): - # Code below sometimes calls us with self being - # a new object. Copy it from lst and recurse. - self.__dict__ = lst[0].__dict__.copy() - merger.mergeObjects(self, lst) - - -@AligningMerger.merger(ot.Anchor) -def merge(merger, self, lst): - # Code below sometimes calls us with self being - # a new object. Copy it from lst and recurse. 
- self.__dict__ = lst[0].__dict__.copy() - merger.mergeObjects(self, lst) - - -def _Lookup_SinglePos_get_effective_value(merger, subtables, glyph): - for self in subtables: - if ( - self is None - or type(self) != ot.SinglePos - or self.Coverage is None - or glyph not in self.Coverage.glyphs - ): - continue - if self.Format == 1: - return self.Value - elif self.Format == 2: - return self.Value[self.Coverage.glyphs.index(glyph)] - else: - raise UnsupportedFormat(merger, subtable="single positioning lookup") - return None - - -def _Lookup_PairPos_get_effective_value_pair( - merger, subtables, firstGlyph, secondGlyph -): - for self in subtables: - if ( - self is None - or type(self) != ot.PairPos - or self.Coverage is None - or firstGlyph not in self.Coverage.glyphs - ): - continue - if self.Format == 1: - ps = self.PairSet[self.Coverage.glyphs.index(firstGlyph)] - pvr = ps.PairValueRecord - for rec in pvr: # TODO Speed up - if rec.SecondGlyph == secondGlyph: - return rec - continue - elif self.Format == 2: - klass1 = self.ClassDef1.classDefs.get(firstGlyph, 0) - klass2 = self.ClassDef2.classDefs.get(secondGlyph, 0) - return self.Class1Record[klass1].Class2Record[klass2] - else: - raise UnsupportedFormat(merger, subtable="pair positioning lookup") - return None - - -@AligningMerger.merger(ot.SinglePos) -def merge(merger, self, lst): - self.ValueFormat = valueFormat = reduce(int.__or__, [l.ValueFormat for l in lst], 0) - if not (len(lst) == 1 or (valueFormat & ~0xF == 0)): - raise UnsupportedFormat(merger, subtable="single positioning lookup") - - # If all have same coverage table and all are format 1, - coverageGlyphs = self.Coverage.glyphs - if all(v.Format == 1 for v in lst) and all( - coverageGlyphs == v.Coverage.glyphs for v in lst - ): - self.Value = otBase.ValueRecord(valueFormat, self.Value) - if valueFormat != 0: - # If v.Value is None, it means a kerning of 0; we want - # it to participate in the model still. - # https://github.com/fonttools/fonttools/issues/3111 - merger.mergeThings( - self.Value, - [v.Value if v.Value is not None else otBase.ValueRecord() for v in lst], - ) - self.ValueFormat = self.Value.getFormat() - return - - # Upgrade everything to Format=2 - self.Format = 2 - lst = [_SinglePosUpgradeToFormat2(v) for v in lst] - - # Align them - glyphs, padded = _merge_GlyphOrders( - merger.font, [v.Coverage.glyphs for v in lst], [v.Value for v in lst] - ) - - self.Coverage.glyphs = glyphs - self.Value = [otBase.ValueRecord(valueFormat) for _ in glyphs] - self.ValueCount = len(self.Value) - - for i, values in enumerate(padded): - for j, glyph in enumerate(glyphs): - if values[j] is not None: - continue - # Fill in value from other subtables - # Note!!! This *might* result in behavior change if ValueFormat2-zeroedness - # is different between used subtable and current subtable! - # TODO(behdad) Check and warn if that happens? - v = _Lookup_SinglePos_get_effective_value( - merger, merger.lookup_subtables[i], glyph - ) - if v is None: - v = otBase.ValueRecord(valueFormat) - values[j] = v - - merger.mergeLists(self.Value, padded) - - # Merge everything else; though, there shouldn't be anything else. 
:) - merger.mergeObjects( - self, lst, exclude=("Format", "Coverage", "Value", "ValueCount", "ValueFormat") - ) - self.ValueFormat = reduce( - int.__or__, [v.getEffectiveFormat() for v in self.Value], 0 - ) - - -@AligningMerger.merger(ot.PairSet) -def merge(merger, self, lst): - # Align them - glyphs, padded = _merge_GlyphOrders( - merger.font, - [[v.SecondGlyph for v in vs.PairValueRecord] for vs in lst], - [vs.PairValueRecord for vs in lst], - ) - - self.PairValueRecord = pvrs = [] - for glyph in glyphs: - pvr = ot.PairValueRecord() - pvr.SecondGlyph = glyph - pvr.Value1 = ( - otBase.ValueRecord(merger.valueFormat1) if merger.valueFormat1 else None - ) - pvr.Value2 = ( - otBase.ValueRecord(merger.valueFormat2) if merger.valueFormat2 else None - ) - pvrs.append(pvr) - self.PairValueCount = len(self.PairValueRecord) - - for i, values in enumerate(padded): - for j, glyph in enumerate(glyphs): - # Fill in value from other subtables - v = ot.PairValueRecord() - v.SecondGlyph = glyph - if values[j] is not None: - vpair = values[j] - else: - vpair = _Lookup_PairPos_get_effective_value_pair( - merger, merger.lookup_subtables[i], self._firstGlyph, glyph - ) - if vpair is None: - v1, v2 = None, None - else: - v1 = getattr(vpair, "Value1", None) - v2 = getattr(vpair, "Value2", None) - v.Value1 = ( - otBase.ValueRecord(merger.valueFormat1, src=v1) - if merger.valueFormat1 - else None - ) - v.Value2 = ( - otBase.ValueRecord(merger.valueFormat2, src=v2) - if merger.valueFormat2 - else None - ) - values[j] = v - del self._firstGlyph - - merger.mergeLists(self.PairValueRecord, padded) - - -def _PairPosFormat1_merge(self, lst, merger): - assert allEqual( - [l.ValueFormat2 == 0 for l in lst if l.PairSet] - ), "Report bug against fonttools." - - # Merge everything else; makes sure Format is the same. - merger.mergeObjects( - self, - lst, - exclude=("Coverage", "PairSet", "PairSetCount", "ValueFormat1", "ValueFormat2"), - ) - - empty = ot.PairSet() - empty.PairValueRecord = [] - empty.PairValueCount = 0 - - # Align them - glyphs, padded = _merge_GlyphOrders( - merger.font, - [v.Coverage.glyphs for v in lst], - [v.PairSet for v in lst], - default=empty, - ) - - self.Coverage.glyphs = glyphs - self.PairSet = [ot.PairSet() for _ in glyphs] - self.PairSetCount = len(self.PairSet) - for glyph, ps in zip(glyphs, self.PairSet): - ps._firstGlyph = glyph - - merger.mergeLists(self.PairSet, padded) - - -def _ClassDef_invert(self, allGlyphs=None): - if isinstance(self, dict): - classDefs = self - else: - classDefs = self.classDefs if self and self.classDefs else {} - m = max(classDefs.values()) if classDefs else 0 - - ret = [] - for _ in range(m + 1): - ret.append(set()) - - for k, v in classDefs.items(): - ret[v].add(k) - - # Class-0 is special. It's "everything else". - if allGlyphs is None: - ret[0] = None - else: - # Limit all classes to glyphs in allGlyphs. - # Collect anything without a non-zero class into class=zero. 
- ret[0] = class0 = set(allGlyphs) - for s in ret[1:]: - s.intersection_update(class0) - class0.difference_update(s) - - return ret - - -def _ClassDef_merge_classify(lst, allGlyphses=None): - self = ot.ClassDef() - self.classDefs = classDefs = {} - allGlyphsesWasNone = allGlyphses is None - if allGlyphsesWasNone: - allGlyphses = [None] * len(lst) - - classifier = classifyTools.Classifier() - for classDef, allGlyphs in zip(lst, allGlyphses): - sets = _ClassDef_invert(classDef, allGlyphs) - if allGlyphs is None: - sets = sets[1:] - classifier.update(sets) - classes = classifier.getClasses() - - if allGlyphsesWasNone: - classes.insert(0, set()) - - for i, classSet in enumerate(classes): - if i == 0: - continue - for g in classSet: - classDefs[g] = i - - return self, classes - - -def _PairPosFormat2_align_matrices(self, lst, font, transparent=False): - matrices = [l.Class1Record for l in lst] - - # Align first classes - self.ClassDef1, classes = _ClassDef_merge_classify( - [l.ClassDef1 for l in lst], [l.Coverage.glyphs for l in lst] - ) - self.Class1Count = len(classes) - new_matrices = [] - for l, matrix in zip(lst, matrices): - nullRow = None - coverage = set(l.Coverage.glyphs) - classDef1 = l.ClassDef1.classDefs - class1Records = [] - for classSet in classes: - exemplarGlyph = next(iter(classSet)) - if exemplarGlyph not in coverage: - # Follow-up to e6125b353e1f54a0280ded5434b8e40d042de69f, - # Fixes https://github.com/googlei18n/fontmake/issues/470 - # Again, revert 8d441779e5afc664960d848f62c7acdbfc71d7b9 - # when merger becomes selfless. - nullRow = None - if nullRow is None: - nullRow = ot.Class1Record() - class2records = nullRow.Class2Record = [] - # TODO: When merger becomes selfless, revert e6125b353e1f54a0280ded5434b8e40d042de69f - for _ in range(l.Class2Count): - if transparent: - rec2 = None - else: - rec2 = ot.Class2Record() - rec2.Value1 = ( - otBase.ValueRecord(self.ValueFormat1) - if self.ValueFormat1 - else None - ) - rec2.Value2 = ( - otBase.ValueRecord(self.ValueFormat2) - if self.ValueFormat2 - else None - ) - class2records.append(rec2) - rec1 = nullRow - else: - klass = classDef1.get(exemplarGlyph, 0) - rec1 = matrix[klass] # TODO handle out-of-range? - class1Records.append(rec1) - new_matrices.append(class1Records) - matrices = new_matrices - del new_matrices - - # Align second classes - self.ClassDef2, classes = _ClassDef_merge_classify([l.ClassDef2 for l in lst]) - self.Class2Count = len(classes) - new_matrices = [] - for l, matrix in zip(lst, matrices): - classDef2 = l.ClassDef2.classDefs - class1Records = [] - for rec1old in matrix: - oldClass2Records = rec1old.Class2Record - rec1new = ot.Class1Record() - class2Records = rec1new.Class2Record = [] - for classSet in classes: - if not classSet: # class=0 - rec2 = oldClass2Records[0] - else: - exemplarGlyph = next(iter(classSet)) - klass = classDef2.get(exemplarGlyph, 0) - rec2 = oldClass2Records[klass] - class2Records.append(copy.deepcopy(rec2)) - class1Records.append(rec1new) - new_matrices.append(class1Records) - matrices = new_matrices - del new_matrices - - return matrices - - -def _PairPosFormat2_merge(self, lst, merger): - assert allEqual( - [l.ValueFormat2 == 0 for l in lst if l.Class1Record] - ), "Report bug against fonttools." 
- - merger.mergeObjects( - self, - lst, - exclude=( - "Coverage", - "ClassDef1", - "Class1Count", - "ClassDef2", - "Class2Count", - "Class1Record", - "ValueFormat1", - "ValueFormat2", - ), - ) - - # Align coverages - glyphs, _ = _merge_GlyphOrders(merger.font, [v.Coverage.glyphs for v in lst]) - self.Coverage.glyphs = glyphs - - # Currently, if the coverage of PairPosFormat2 subtables are different, - # we do NOT bother walking down the subtable list when filling in new - # rows for alignment. As such, this is only correct if current subtable - # is the last subtable in the lookup. Ensure that. - # - # Note that our canonicalization process merges trailing PairPosFormat2's, - # so in reality this is rare. - for l, subtables in zip(lst, merger.lookup_subtables): - if l.Coverage.glyphs != glyphs: - assert l == subtables[-1] - - matrices = _PairPosFormat2_align_matrices(self, lst, merger.font) - - self.Class1Record = list(matrices[0]) # TODO move merger to be selfless - merger.mergeLists(self.Class1Record, matrices) - - -@AligningMerger.merger(ot.PairPos) -def merge(merger, self, lst): - merger.valueFormat1 = self.ValueFormat1 = reduce( - int.__or__, [l.ValueFormat1 for l in lst], 0 - ) - merger.valueFormat2 = self.ValueFormat2 = reduce( - int.__or__, [l.ValueFormat2 for l in lst], 0 - ) - - if self.Format == 1: - _PairPosFormat1_merge(self, lst, merger) - elif self.Format == 2: - _PairPosFormat2_merge(self, lst, merger) - else: - raise UnsupportedFormat(merger, subtable="pair positioning lookup") - - del merger.valueFormat1, merger.valueFormat2 - - # Now examine the list of value records, and update to the union of format values, - # as merge might have created new values. - vf1 = 0 - vf2 = 0 - if self.Format == 1: - for pairSet in self.PairSet: - for pairValueRecord in pairSet.PairValueRecord: - pv1 = getattr(pairValueRecord, "Value1", None) - if pv1 is not None: - vf1 |= pv1.getFormat() - pv2 = getattr(pairValueRecord, "Value2", None) - if pv2 is not None: - vf2 |= pv2.getFormat() - elif self.Format == 2: - for class1Record in self.Class1Record: - for class2Record in class1Record.Class2Record: - pv1 = getattr(class2Record, "Value1", None) - if pv1 is not None: - vf1 |= pv1.getFormat() - pv2 = getattr(class2Record, "Value2", None) - if pv2 is not None: - vf2 |= pv2.getFormat() - self.ValueFormat1 = vf1 - self.ValueFormat2 = vf2 - - -def _MarkBasePosFormat1_merge(self, lst, merger, Mark="Mark", Base="Base"): - self.ClassCount = max(l.ClassCount for l in lst) - - MarkCoverageGlyphs, MarkRecords = _merge_GlyphOrders( - merger.font, - [getattr(l, Mark + "Coverage").glyphs for l in lst], - [getattr(l, Mark + "Array").MarkRecord for l in lst], - ) - getattr(self, Mark + "Coverage").glyphs = MarkCoverageGlyphs - - BaseCoverageGlyphs, BaseRecords = _merge_GlyphOrders( - merger.font, - [getattr(l, Base + "Coverage").glyphs for l in lst], - [getattr(getattr(l, Base + "Array"), Base + "Record") for l in lst], - ) - getattr(self, Base + "Coverage").glyphs = BaseCoverageGlyphs - - # MarkArray - records = [] - for g, glyphRecords in zip(MarkCoverageGlyphs, zip(*MarkRecords)): - allClasses = [r.Class for r in glyphRecords if r is not None] - - # TODO Right now we require that all marks have same class in - # all masters that cover them. This is not required. - # - # We can relax that by just requiring that all marks that have - # the same class in a master, have the same class in every other - # master. 
Indeed, if, say, a sparse master only covers one mark, - # that mark probably will get class 0, which would possibly be - # different from its class in other masters. - # - # We can even go further and reclassify marks to support any - # input. But, since, it's unlikely that two marks being both, - # say, "top" in one master, and one being "top" and other being - # "top-right" in another master, we shouldn't do that, as any - # failures in that case will probably signify mistakes in the - # input masters. - - if not allEqual(allClasses): - raise ShouldBeConstant(merger, expected=allClasses[0], got=allClasses) - else: - rec = ot.MarkRecord() - rec.Class = allClasses[0] - allAnchors = [None if r is None else r.MarkAnchor for r in glyphRecords] - if allNone(allAnchors): - anchor = None - else: - anchor = ot.Anchor() - anchor.Format = 1 - merger.mergeThings(anchor, allAnchors) - rec.MarkAnchor = anchor - records.append(rec) - array = ot.MarkArray() - array.MarkRecord = records - array.MarkCount = len(records) - setattr(self, Mark + "Array", array) - - # BaseArray - records = [] - for g, glyphRecords in zip(BaseCoverageGlyphs, zip(*BaseRecords)): - if allNone(glyphRecords): - rec = None - else: - rec = getattr(ot, Base + "Record")() - anchors = [] - setattr(rec, Base + "Anchor", anchors) - glyphAnchors = [ - [] if r is None else getattr(r, Base + "Anchor") for r in glyphRecords - ] - for l in glyphAnchors: - l.extend([None] * (self.ClassCount - len(l))) - for allAnchors in zip(*glyphAnchors): - if allNone(allAnchors): - anchor = None - else: - anchor = ot.Anchor() - anchor.Format = 1 - merger.mergeThings(anchor, allAnchors) - anchors.append(anchor) - records.append(rec) - array = getattr(ot, Base + "Array")() - setattr(array, Base + "Record", records) - setattr(array, Base + "Count", len(records)) - setattr(self, Base + "Array", array) - - -@AligningMerger.merger(ot.MarkBasePos) -def merge(merger, self, lst): - if not allEqualTo(self.Format, (l.Format for l in lst)): - raise InconsistentFormats( - merger, - subtable="mark-to-base positioning lookup", - expected=self.Format, - got=[l.Format for l in lst], - ) - if self.Format == 1: - _MarkBasePosFormat1_merge(self, lst, merger) - else: - raise UnsupportedFormat(merger, subtable="mark-to-base positioning lookup") - - -@AligningMerger.merger(ot.MarkMarkPos) -def merge(merger, self, lst): - if not allEqualTo(self.Format, (l.Format for l in lst)): - raise InconsistentFormats( - merger, - subtable="mark-to-mark positioning lookup", - expected=self.Format, - got=[l.Format for l in lst], - ) - if self.Format == 1: - _MarkBasePosFormat1_merge(self, lst, merger, "Mark1", "Mark2") - else: - raise UnsupportedFormat(merger, subtable="mark-to-mark positioning lookup") - - -def _PairSet_flatten(lst, font): - self = ot.PairSet() - self.Coverage = ot.Coverage() - - # Align them - glyphs, padded = _merge_GlyphOrders( - font, - [[v.SecondGlyph for v in vs.PairValueRecord] for vs in lst], - [vs.PairValueRecord for vs in lst], - ) - - self.Coverage.glyphs = glyphs - self.PairValueRecord = pvrs = [] - for values in zip(*padded): - for v in values: - if v is not None: - pvrs.append(v) - break - else: - assert False - self.PairValueCount = len(self.PairValueRecord) - - return self - - -def _Lookup_PairPosFormat1_subtables_flatten(lst, font): - assert allEqual( - [l.ValueFormat2 == 0 for l in lst if l.PairSet] - ), "Report bug against fonttools." 
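    # Descriptive note: this helper folds several Format-1 PairPos subtables into one.
    # It takes the union of their coverages and, for each first glyph, merges the
    # PairSets of every subtable that covers it (per second glyph, the first
    # non-None record wins; see _PairSet_flatten above).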
- - self = ot.PairPos() - self.Format = 1 - self.Coverage = ot.Coverage() - self.ValueFormat1 = reduce(int.__or__, [l.ValueFormat1 for l in lst], 0) - self.ValueFormat2 = reduce(int.__or__, [l.ValueFormat2 for l in lst], 0) - - # Align them - glyphs, padded = _merge_GlyphOrders( - font, [v.Coverage.glyphs for v in lst], [v.PairSet for v in lst] - ) - - self.Coverage.glyphs = glyphs - self.PairSet = [ - _PairSet_flatten([v for v in values if v is not None], font) - for values in zip(*padded) - ] - self.PairSetCount = len(self.PairSet) - return self - - -def _Lookup_PairPosFormat2_subtables_flatten(lst, font): - assert allEqual( - [l.ValueFormat2 == 0 for l in lst if l.Class1Record] - ), "Report bug against fonttools." - - self = ot.PairPos() - self.Format = 2 - self.Coverage = ot.Coverage() - self.ValueFormat1 = reduce(int.__or__, [l.ValueFormat1 for l in lst], 0) - self.ValueFormat2 = reduce(int.__or__, [l.ValueFormat2 for l in lst], 0) - - # Align them - glyphs, _ = _merge_GlyphOrders(font, [v.Coverage.glyphs for v in lst]) - self.Coverage.glyphs = glyphs - - matrices = _PairPosFormat2_align_matrices(self, lst, font, transparent=True) - - matrix = self.Class1Record = [] - for rows in zip(*matrices): - row = ot.Class1Record() - matrix.append(row) - row.Class2Record = [] - row = row.Class2Record - for cols in zip(*list(r.Class2Record for r in rows)): - col = next(iter(c for c in cols if c is not None)) - row.append(col) - - return self - - -def _Lookup_PairPos_subtables_canonicalize(lst, font): - """Merge multiple Format1 subtables at the beginning of lst, - and merge multiple consecutive Format2 subtables that have the same - Class2 (ie. were split because of offset overflows). Returns new list.""" - lst = list(lst) - - l = len(lst) - i = 0 - while i < l and lst[i].Format == 1: - i += 1 - lst[:i] = [_Lookup_PairPosFormat1_subtables_flatten(lst[:i], font)] - - l = len(lst) - i = l - while i > 0 and lst[i - 1].Format == 2: - i -= 1 - lst[i:] = [_Lookup_PairPosFormat2_subtables_flatten(lst[i:], font)] - - return lst - - -def _Lookup_SinglePos_subtables_flatten(lst, font, min_inclusive_rec_format): - glyphs, _ = _merge_GlyphOrders(font, [v.Coverage.glyphs for v in lst], None) - num_glyphs = len(glyphs) - new = ot.SinglePos() - new.Format = 2 - new.ValueFormat = min_inclusive_rec_format - new.Coverage = ot.Coverage() - new.Coverage.glyphs = glyphs - new.ValueCount = num_glyphs - new.Value = [None] * num_glyphs - for singlePos in lst: - if singlePos.Format == 1: - val_rec = singlePos.Value - for gname in singlePos.Coverage.glyphs: - i = glyphs.index(gname) - new.Value[i] = copy.deepcopy(val_rec) - elif singlePos.Format == 2: - for j, gname in enumerate(singlePos.Coverage.glyphs): - val_rec = singlePos.Value[j] - i = glyphs.index(gname) - new.Value[i] = copy.deepcopy(val_rec) - return [new] - - -@AligningMerger.merger(ot.CursivePos) -def merge(merger, self, lst): - # Align them - glyphs, padded = _merge_GlyphOrders( - merger.font, - [l.Coverage.glyphs for l in lst], - [l.EntryExitRecord for l in lst], - ) - - self.Format = 1 - self.Coverage = ot.Coverage() - self.Coverage.glyphs = glyphs - self.EntryExitRecord = [] - for _ in glyphs: - rec = ot.EntryExitRecord() - rec.EntryAnchor = ot.Anchor() - rec.EntryAnchor.Format = 1 - rec.ExitAnchor = ot.Anchor() - rec.ExitAnchor.Format = 1 - self.EntryExitRecord.append(rec) - merger.mergeLists(self.EntryExitRecord, padded) - self.EntryExitCount = len(self.EntryExitRecord) - - -@AligningMerger.merger(ot.Lookup) -def merge(merger, self, lst): - subtables = 
merger.lookup_subtables = [l.SubTable for l in lst] - - # Remove Extension subtables - for l, sts in list(zip(lst, subtables)) + [(self, self.SubTable)]: - if not sts: - continue - if sts[0].__class__.__name__.startswith("Extension"): - if not allEqual([st.__class__ for st in sts]): - raise InconsistentExtensions( - merger, - expected="Extension", - got=[st.__class__.__name__ for st in sts], - ) - if not allEqual([st.ExtensionLookupType for st in sts]): - raise InconsistentExtensions(merger) - l.LookupType = sts[0].ExtensionLookupType - new_sts = [st.ExtSubTable for st in sts] - del sts[:] - sts.extend(new_sts) - - isPairPos = self.SubTable and isinstance(self.SubTable[0], ot.PairPos) - - if isPairPos: - # AFDKO and feaLib sometimes generate two Format1 subtables instead of one. - # Merge those before continuing. - # https://github.com/fonttools/fonttools/issues/719 - self.SubTable = _Lookup_PairPos_subtables_canonicalize( - self.SubTable, merger.font - ) - subtables = merger.lookup_subtables = [ - _Lookup_PairPos_subtables_canonicalize(st, merger.font) for st in subtables - ] - else: - isSinglePos = self.SubTable and isinstance(self.SubTable[0], ot.SinglePos) - if isSinglePos: - numSubtables = [len(st) for st in subtables] - if not all([nums == numSubtables[0] for nums in numSubtables]): - # Flatten list of SinglePos subtables to single Format 2 subtable, - # with all value records set to the rec format type. - # We use buildSinglePos() to optimize the lookup after merging. - valueFormatList = [t.ValueFormat for st in subtables for t in st] - # Find the minimum value record that can accomodate all the singlePos subtables. - mirf = reduce(ior, valueFormatList) - self.SubTable = _Lookup_SinglePos_subtables_flatten( - self.SubTable, merger.font, mirf - ) - subtables = merger.lookup_subtables = [ - _Lookup_SinglePos_subtables_flatten(st, merger.font, mirf) - for st in subtables - ] - flattened = True - else: - flattened = False - - merger.mergeLists(self.SubTable, subtables) - self.SubTableCount = len(self.SubTable) - - if isPairPos: - # If format-1 subtable created during canonicalization is empty, remove it. - assert len(self.SubTable) >= 1 and self.SubTable[0].Format == 1 - if not self.SubTable[0].Coverage.glyphs: - self.SubTable.pop(0) - self.SubTableCount -= 1 - - # If format-2 subtable created during canonicalization is empty, remove it. - assert len(self.SubTable) >= 1 and self.SubTable[-1].Format == 2 - if not self.SubTable[-1].Coverage.glyphs: - self.SubTable.pop(-1) - self.SubTableCount -= 1 - - # Compact the merged subtables - # This is a good moment to do it because the compaction should create - # smaller subtables, which may prevent overflows from happening. - # Keep reading the value from the ENV until ufo2ft switches to the config system - level = merger.font.cfg.get( - "fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL", - default=_compression_level_from_env(), - ) - if level != 0: - log.info("Compacting GPOS...") - self.SubTable = compact_pair_pos(merger.font, level, self.SubTable) - self.SubTableCount = len(self.SubTable) - - elif isSinglePos and flattened: - singlePosTable = self.SubTable[0] - glyphs = singlePosTable.Coverage.glyphs - # We know that singlePosTable is Format 2, as this is set - # in _Lookup_SinglePos_subtables_flatten. 
- singlePosMapping = { - gname: valRecord for gname, valRecord in zip(glyphs, singlePosTable.Value) - } - self.SubTable = buildSinglePos( - singlePosMapping, merger.font.getReverseGlyphMap() - ) - merger.mergeObjects(self, lst, exclude=["SubTable", "SubTableCount"]) - - del merger.lookup_subtables - - -# -# InstancerMerger -# - - -class InstancerMerger(AligningMerger): - """A merger that takes multiple master fonts, and instantiates - an instance.""" - - def __init__(self, font, model, location): - Merger.__init__(self, font) - self.model = model - self.location = location - self.scalars = model.getScalars(location) - - -@InstancerMerger.merger(ot.CaretValue) -def merge(merger, self, lst): - assert self.Format == 1 - Coords = [a.Coordinate for a in lst] - model = merger.model - scalars = merger.scalars - self.Coordinate = otRound(model.interpolateFromMastersAndScalars(Coords, scalars)) - - -@InstancerMerger.merger(ot.Anchor) -def merge(merger, self, lst): - assert self.Format == 1 - XCoords = [a.XCoordinate for a in lst] - YCoords = [a.YCoordinate for a in lst] - model = merger.model - scalars = merger.scalars - self.XCoordinate = otRound(model.interpolateFromMastersAndScalars(XCoords, scalars)) - self.YCoordinate = otRound(model.interpolateFromMastersAndScalars(YCoords, scalars)) - - -@InstancerMerger.merger(otBase.ValueRecord) -def merge(merger, self, lst): - model = merger.model - scalars = merger.scalars - # TODO Handle differing valueformats - for name, tableName in [ - ("XAdvance", "XAdvDevice"), - ("YAdvance", "YAdvDevice"), - ("XPlacement", "XPlaDevice"), - ("YPlacement", "YPlaDevice"), - ]: - assert not hasattr(self, tableName) - - if hasattr(self, name): - values = [getattr(a, name, 0) for a in lst] - value = otRound(model.interpolateFromMastersAndScalars(values, scalars)) - setattr(self, name, value) - - -# -# MutatorMerger -# - - -class MutatorMerger(AligningMerger): - """A merger that takes a variable font, and instantiates - an instance. While there's no "merging" to be done per se, - the operation can benefit from many operations that the - aligning merger does.""" - - def __init__(self, font, instancer, deleteVariations=True): - Merger.__init__(self, font) - self.instancer = instancer - self.deleteVariations = deleteVariations - - -@MutatorMerger.merger(ot.CaretValue) -def merge(merger, self, lst): - # Hack till we become selfless. - self.__dict__ = lst[0].__dict__.copy() - - if self.Format != 3: - return - - instancer = merger.instancer - dev = self.DeviceTable - if merger.deleteVariations: - del self.DeviceTable - if dev: - assert dev.DeltaFormat == 0x8000 - varidx = (dev.StartSize << 16) + dev.EndSize - delta = otRound(instancer[varidx]) - self.Coordinate += delta - - if merger.deleteVariations: - self.Format = 1 - - -@MutatorMerger.merger(ot.Anchor) -def merge(merger, self, lst): - # Hack till we become selfless. 
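    # Descriptive note: copy the single master's fields as-is; for Format-3 anchors,
    # the interpolated X/Y device-table deltas are folded into the coordinates below
    # and the anchor is downgraded to Format 1 when deleteVariations is set.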
- self.__dict__ = lst[0].__dict__.copy() - - if self.Format != 3: - return - - instancer = merger.instancer - for v in "XY": - tableName = v + "DeviceTable" - if not hasattr(self, tableName): - continue - dev = getattr(self, tableName) - if merger.deleteVariations: - delattr(self, tableName) - if dev is None: - continue - - assert dev.DeltaFormat == 0x8000 - varidx = (dev.StartSize << 16) + dev.EndSize - delta = otRound(instancer[varidx]) - - attr = v + "Coordinate" - setattr(self, attr, getattr(self, attr) + delta) - - if merger.deleteVariations: - self.Format = 1 - - -@MutatorMerger.merger(otBase.ValueRecord) -def merge(merger, self, lst): - # Hack till we become selfless. - self.__dict__ = lst[0].__dict__.copy() - - instancer = merger.instancer - for name, tableName in [ - ("XAdvance", "XAdvDevice"), - ("YAdvance", "YAdvDevice"), - ("XPlacement", "XPlaDevice"), - ("YPlacement", "YPlaDevice"), - ]: - if not hasattr(self, tableName): - continue - dev = getattr(self, tableName) - if merger.deleteVariations: - delattr(self, tableName) - if dev is None: - continue - - assert dev.DeltaFormat == 0x8000 - varidx = (dev.StartSize << 16) + dev.EndSize - delta = otRound(instancer[varidx]) - - setattr(self, name, getattr(self, name, 0) + delta) - - -# -# VariationMerger -# - - -class VariationMerger(AligningMerger): - """A merger that takes multiple master fonts, and builds a - variable font.""" - - def __init__(self, model, axisTags, font): - Merger.__init__(self, font) - self.store_builder = varStore.OnlineVarStoreBuilder(axisTags) - self.setModel(model) - - def setModel(self, model): - self.model = model - self.store_builder.setModel(model) - - def mergeThings(self, out, lst): - masterModel = None - origTTFs = None - if None in lst: - if allNone(lst): - if out is not None: - raise FoundANone(self, got=lst) - return - - # temporarily subset the list of master ttfs to the ones for which - # master values are not None - origTTFs = self.ttfs - if self.ttfs: - self.ttfs = subList([v is not None for v in lst], self.ttfs) - - masterModel = self.model - model, lst = masterModel.getSubModel(lst) - self.setModel(model) - - super(VariationMerger, self).mergeThings(out, lst) - - if masterModel: - self.setModel(masterModel) - if origTTFs: - self.ttfs = origTTFs - - -def buildVarDevTable(store_builder, master_values): - if allEqual(master_values): - return master_values[0], None - base, varIdx = store_builder.storeMasters(master_values) - return base, builder.buildVarDevTable(varIdx) - - -@VariationMerger.merger(ot.BaseCoord) -def merge(merger, self, lst): - if self.Format != 1: - raise UnsupportedFormat(merger, subtable="a baseline coordinate") - self.Coordinate, DeviceTable = buildVarDevTable( - merger.store_builder, [a.Coordinate for a in lst] - ) - if DeviceTable: - self.Format = 3 - self.DeviceTable = DeviceTable - - -@VariationMerger.merger(ot.CaretValue) -def merge(merger, self, lst): - if self.Format != 1: - raise UnsupportedFormat(merger, subtable="a caret") - self.Coordinate, DeviceTable = buildVarDevTable( - merger.store_builder, [a.Coordinate for a in lst] - ) - if DeviceTable: - self.Format = 3 - self.DeviceTable = DeviceTable - - -@VariationMerger.merger(ot.Anchor) -def merge(merger, self, lst): - if self.Format != 1: - raise UnsupportedFormat(merger, subtable="an anchor") - self.XCoordinate, XDeviceTable = buildVarDevTable( - merger.store_builder, [a.XCoordinate for a in lst] - ) - self.YCoordinate, YDeviceTable = buildVarDevTable( - merger.store_builder, [a.YCoordinate for a in lst] - ) - if 
XDeviceTable or YDeviceTable: - self.Format = 3 - self.XDeviceTable = XDeviceTable - self.YDeviceTable = YDeviceTable - - -@VariationMerger.merger(otBase.ValueRecord) -def merge(merger, self, lst): - for name, tableName in [ - ("XAdvance", "XAdvDevice"), - ("YAdvance", "YAdvDevice"), - ("XPlacement", "XPlaDevice"), - ("YPlacement", "YPlaDevice"), - ]: - if hasattr(self, name): - value, deviceTable = buildVarDevTable( - merger.store_builder, [getattr(a, name, 0) for a in lst] - ) - setattr(self, name, value) - if deviceTable: - setattr(self, tableName, deviceTable) - - -class COLRVariationMerger(VariationMerger): - """A specialized VariationMerger that takes multiple master fonts containing - COLRv1 tables, and builds a variable COLR font. - - COLR tables are special in that variable subtables can be associated with - multiple delta-set indices (via VarIndexBase). - They also contain tables that must change their type (not simply the Format) - as they become variable (e.g. Affine2x3 -> VarAffine2x3) so this merger takes - care of that too. - """ - - def __init__(self, model, axisTags, font, allowLayerReuse=True): - VariationMerger.__init__(self, model, axisTags, font) - # maps {tuple(varIdxes): VarIndexBase} to facilitate reuse of VarIndexBase - # between variable tables with same varIdxes. - self.varIndexCache = {} - # flat list of all the varIdxes generated while merging - self.varIdxes = [] - # set of id()s of the subtables that contain variations after merging - # and need to be upgraded to the associated VarType. - self.varTableIds = set() - # we keep these around for rebuilding a LayerList while merging PaintColrLayers - self.layers = [] - self.layerReuseCache = None - if allowLayerReuse: - self.layerReuseCache = LayerReuseCache() - # flag to ensure BaseGlyphList is fully merged before LayerList gets processed - self._doneBaseGlyphs = False - - def mergeTables(self, font, master_ttfs, tableTags=("COLR",)): - if "COLR" in tableTags and "COLR" in font: - # The merger modifies the destination COLR table in-place. If this contains - # multiple PaintColrLayers referencing the same layers from LayerList, it's - # a problem because we may risk modifying the same paint more than once, or - # worse, fail while attempting to do that. - # We don't know whether the master COLR table was built with layer reuse - # disabled, thus to be safe we rebuild its LayerList so that it contains only - # unique layers referenced from non-overlapping PaintColrLayers throughout - # the base paint graphs. 
- self.expandPaintColrLayers(font["COLR"].table) - VariationMerger.mergeTables(self, font, master_ttfs, tableTags) - - def checkFormatEnum(self, out, lst, validate=lambda _: True): - fmt = out.Format - formatEnum = out.formatEnum - ok = False - try: - fmt = formatEnum(fmt) - except ValueError: - pass - else: - ok = validate(fmt) - if not ok: - raise UnsupportedFormat(self, subtable=type(out).__name__, value=fmt) - expected = fmt - got = [] - for v in lst: - fmt = getattr(v, "Format", None) - try: - fmt = formatEnum(fmt) - except ValueError: - pass - got.append(fmt) - if not allEqualTo(expected, got): - raise InconsistentFormats( - self, - subtable=type(out).__name__, - expected=expected, - got=got, - ) - return expected - - def mergeSparseDict(self, out, lst): - for k in out.keys(): - try: - self.mergeThings(out[k], [v.get(k) for v in lst]) - except VarLibMergeError as e: - e.stack.append(f"[{k!r}]") - raise - - def mergeAttrs(self, out, lst, attrs): - for attr in attrs: - value = getattr(out, attr) - values = [getattr(item, attr) for item in lst] - try: - self.mergeThings(value, values) - except VarLibMergeError as e: - e.stack.append(f".{attr}") - raise - - def storeMastersForAttr(self, out, lst, attr): - master_values = [getattr(item, attr) for item in lst] - - # VarStore treats deltas for fixed-size floats as integers, so we - # must convert master values to int before storing them in the builder - # then back to float. - is_fixed_size_float = False - conv = out.getConverterByName(attr) - if isinstance(conv, BaseFixedValue): - is_fixed_size_float = True - master_values = [conv.toInt(v) for v in master_values] - - baseValue = master_values[0] - varIdx = ot.NO_VARIATION_INDEX - if not allEqual(master_values): - baseValue, varIdx = self.store_builder.storeMasters(master_values) - - if is_fixed_size_float: - baseValue = conv.fromInt(baseValue) - - return baseValue, varIdx - - def storeVariationIndices(self, varIdxes) -> int: - # try to reuse an existing VarIndexBase for the same varIdxes, or else - # create a new one - key = tuple(varIdxes) - varIndexBase = self.varIndexCache.get(key) - - if varIndexBase is None: - # scan for a full match anywhere in the self.varIdxes - for i in range(len(self.varIdxes) - len(varIdxes) + 1): - if self.varIdxes[i : i + len(varIdxes)] == varIdxes: - self.varIndexCache[key] = varIndexBase = i - break - - if varIndexBase is None: - # try find a partial match at the end of the self.varIdxes - for n in range(len(varIdxes) - 1, 0, -1): - if self.varIdxes[-n:] == varIdxes[:n]: - varIndexBase = len(self.varIdxes) - n - self.varIndexCache[key] = varIndexBase - self.varIdxes.extend(varIdxes[n:]) - break - - if varIndexBase is None: - # no match found, append at the end - self.varIndexCache[key] = varIndexBase = len(self.varIdxes) - self.varIdxes.extend(varIdxes) - - return varIndexBase - - def mergeVariableAttrs(self, out, lst, attrs) -> int: - varIndexBase = ot.NO_VARIATION_INDEX - varIdxes = [] - for attr in attrs: - baseValue, varIdx = self.storeMastersForAttr(out, lst, attr) - setattr(out, attr, baseValue) - varIdxes.append(varIdx) - - if any(v != ot.NO_VARIATION_INDEX for v in varIdxes): - varIndexBase = self.storeVariationIndices(varIdxes) - - return varIndexBase - - @classmethod - def convertSubTablesToVarType(cls, table): - for path in dfs_base_table( - table, - skip_root=True, - predicate=lambda path: ( - getattr(type(path[-1].value), "VarType", None) is not None - ), - ): - st = path[-1] - subTable = st.value - varType = type(subTable).VarType - newSubTable 
= varType() - newSubTable.__dict__.update(subTable.__dict__) - newSubTable.populateDefaults() - parent = path[-2].value - if st.index is not None: - getattr(parent, st.name)[st.index] = newSubTable - else: - setattr(parent, st.name, newSubTable) - - @staticmethod - def expandPaintColrLayers(colr): - """Rebuild LayerList without PaintColrLayers reuse. - - Each base paint graph is fully DFS-traversed (with exception of PaintColrGlyph - which are irrelevant for this); any layers referenced via PaintColrLayers are - collected into a new LayerList and duplicated when reuse is detected, to ensure - that all paints are distinct objects at the end of the process. - PaintColrLayers's FirstLayerIndex/NumLayers are updated so that no overlap - is left. Also, any consecutively nested PaintColrLayers are flattened. - The COLR table's LayerList is replaced with the new unique layers. - A side effect is also that any layer from the old LayerList which is not - referenced by any PaintColrLayers is dropped. - """ - if not colr.LayerList: - # if no LayerList, there's nothing to expand - return - uniqueLayerIDs = set() - newLayerList = [] - for rec in colr.BaseGlyphList.BaseGlyphPaintRecord: - frontier = [rec.Paint] - while frontier: - paint = frontier.pop() - if paint.Format == ot.PaintFormat.PaintColrGlyph: - # don't traverse these, we treat them as constant for merging - continue - elif paint.Format == ot.PaintFormat.PaintColrLayers: - # de-treeify any nested PaintColrLayers, append unique copies to - # the new layer list and update PaintColrLayers index/count - children = list(_flatten_layers(paint, colr)) - first_layer_index = len(newLayerList) - for layer in children: - if id(layer) in uniqueLayerIDs: - layer = copy.deepcopy(layer) - assert id(layer) not in uniqueLayerIDs - newLayerList.append(layer) - uniqueLayerIDs.add(id(layer)) - paint.FirstLayerIndex = first_layer_index - paint.NumLayers = len(children) - else: - children = paint.getChildren(colr) - frontier.extend(reversed(children)) - # sanity check all the new layers are distinct objects - assert len(newLayerList) == len(uniqueLayerIDs) - colr.LayerList.Paint = newLayerList - colr.LayerList.LayerCount = len(newLayerList) - - -@COLRVariationMerger.merger(ot.BaseGlyphList) -def merge(merger, self, lst): - # ignore BaseGlyphCount, allow sparse glyph sets across masters - out = {rec.BaseGlyph: rec for rec in self.BaseGlyphPaintRecord} - masters = [{rec.BaseGlyph: rec for rec in m.BaseGlyphPaintRecord} for m in lst] - - for i, g in enumerate(out.keys()): - try: - # missing base glyphs don't participate in the merge - merger.mergeThings(out[g], [v.get(g) for v in masters]) - except VarLibMergeError as e: - e.stack.append(f".BaseGlyphPaintRecord[{i}]") - e.cause["location"] = f"base glyph {g!r}" - raise - - merger._doneBaseGlyphs = True - - -@COLRVariationMerger.merger(ot.LayerList) -def merge(merger, self, lst): - # nothing to merge for LayerList, assuming we have already merged all PaintColrLayers - # found while traversing the paint graphs rooted at BaseGlyphPaintRecords. - assert merger._doneBaseGlyphs, "BaseGlyphList must be merged before LayerList" - # Simply flush the final list of layers and go home. 
- self.LayerCount = len(merger.layers) - self.Paint = merger.layers - - -def _flatten_layers(root, colr): - assert root.Format == ot.PaintFormat.PaintColrLayers - for paint in root.getChildren(colr): - if paint.Format == ot.PaintFormat.PaintColrLayers: - yield from _flatten_layers(paint, colr) - else: - yield paint - - -def _merge_PaintColrLayers(self, out, lst): - # we only enforce that the (flat) number of layers is the same across all masters - # but we allow FirstLayerIndex to differ to acommodate for sparse glyph sets. - - out_layers = list(_flatten_layers(out, self.font["COLR"].table)) - - # sanity check ttfs are subset to current values (see VariationMerger.mergeThings) - # before matching each master PaintColrLayers to its respective COLR by position - assert len(self.ttfs) == len(lst) - master_layerses = [ - list(_flatten_layers(lst[i], self.ttfs[i]["COLR"].table)) - for i in range(len(lst)) - ] - - try: - self.mergeLists(out_layers, master_layerses) - except VarLibMergeError as e: - # NOTE: This attribute doesn't actually exist in PaintColrLayers but it's - # handy to have it in the stack trace for debugging. - e.stack.append(".Layers") - raise - - # following block is very similar to LayerListBuilder._beforeBuildPaintColrLayers - # but I couldn't find a nice way to share the code between the two... - - if self.layerReuseCache is not None: - # successful reuse can make the list smaller - out_layers = self.layerReuseCache.try_reuse(out_layers) - - # if the list is still too big we need to tree-fy it - is_tree = len(out_layers) > MAX_PAINT_COLR_LAYER_COUNT - out_layers = build_n_ary_tree(out_layers, n=MAX_PAINT_COLR_LAYER_COUNT) - - # We now have a tree of sequences with Paint leaves. - # Convert the sequences into PaintColrLayers. - def listToColrLayers(paint): - if isinstance(paint, list): - layers = [listToColrLayers(l) for l in paint] - paint = ot.Paint() - paint.Format = int(ot.PaintFormat.PaintColrLayers) - paint.NumLayers = len(layers) - paint.FirstLayerIndex = len(self.layers) - self.layers.extend(layers) - if self.layerReuseCache is not None: - self.layerReuseCache.add(layers, paint.FirstLayerIndex) - return paint - - out_layers = [listToColrLayers(l) for l in out_layers] - - if len(out_layers) == 1 and out_layers[0].Format == ot.PaintFormat.PaintColrLayers: - # special case when the reuse cache finds a single perfect PaintColrLayers match - # (it can only come from a successful reuse, _flatten_layers has gotten rid of - # all nested PaintColrLayers already); we assign it directly and avoid creating - # an extra table - out.NumLayers = out_layers[0].NumLayers - out.FirstLayerIndex = out_layers[0].FirstLayerIndex - else: - out.NumLayers = len(out_layers) - out.FirstLayerIndex = len(self.layers) - - self.layers.extend(out_layers) - - # Register our parts for reuse provided we aren't a tree - # If we are a tree the leaves registered for reuse and that will suffice - if self.layerReuseCache is not None and not is_tree: - self.layerReuseCache.add(out_layers, out.FirstLayerIndex) - - -@COLRVariationMerger.merger((ot.Paint, ot.ClipBox)) -def merge(merger, self, lst): - fmt = merger.checkFormatEnum(self, lst, lambda fmt: not fmt.is_variable()) - - if fmt is ot.PaintFormat.PaintColrLayers: - _merge_PaintColrLayers(merger, self, lst) - return - - varFormat = fmt.as_variable() - - varAttrs = () - if varFormat is not None: - varAttrs = otBase.getVariableAttrs(type(self), varFormat) - staticAttrs = (c.name for c in self.getConverters() if c.name not in varAttrs) - - 
merger.mergeAttrs(self, lst, staticAttrs) - - varIndexBase = merger.mergeVariableAttrs(self, lst, varAttrs) - - subTables = [st.value for st in self.iterSubTables()] - - # Convert table to variable if itself has variations or any subtables have - isVariable = varIndexBase != ot.NO_VARIATION_INDEX or any( - id(table) in merger.varTableIds for table in subTables - ) - - if isVariable: - if varAttrs: - # Some PaintVar* don't have any scalar attributes that can vary, - # only indirect offsets to other variable subtables, thus have - # no VarIndexBase of their own (e.g. PaintVarTransform) - self.VarIndexBase = varIndexBase - - if subTables: - # Convert Affine2x3 -> VarAffine2x3, ColorLine -> VarColorLine, etc. - merger.convertSubTablesToVarType(self) - - assert varFormat is not None - self.Format = int(varFormat) - - -@COLRVariationMerger.merger((ot.Affine2x3, ot.ColorStop)) -def merge(merger, self, lst): - varType = type(self).VarType - - varAttrs = otBase.getVariableAttrs(varType) - staticAttrs = (c.name for c in self.getConverters() if c.name not in varAttrs) - - merger.mergeAttrs(self, lst, staticAttrs) - - varIndexBase = merger.mergeVariableAttrs(self, lst, varAttrs) - - if varIndexBase != ot.NO_VARIATION_INDEX: - self.VarIndexBase = varIndexBase - # mark as having variations so the parent table will convert to Var{Type} - merger.varTableIds.add(id(self)) - - -@COLRVariationMerger.merger(ot.ColorLine) -def merge(merger, self, lst): - merger.mergeAttrs(self, lst, (c.name for c in self.getConverters())) - - if any(id(stop) in merger.varTableIds for stop in self.ColorStop): - merger.convertSubTablesToVarType(self) - merger.varTableIds.add(id(self)) - - -@COLRVariationMerger.merger(ot.ClipList, "clips") -def merge(merger, self, lst): - # 'sparse' in that we allow non-default masters to omit ClipBox entries - # for some/all glyphs (i.e. 
they don't participate) - merger.mergeSparseDict(self, lst) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3ca142e0.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3ca142e0.css deleted file mode 100644 index 77ebe6c1fea2e3557f76088bb9f5c30e2cfdb72a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-3ca142e0.css +++ /dev/null @@ -1 +0,0 @@ -.spacer.svelte-1kspdo{display:inline-block;width:0;height:0}.json-node.svelte-1kspdo{display:inline;color:var(--body-text-color);line-height:var(--line-sm);font-family:var(--font-mono)}.expand-array.svelte-1kspdo{border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);background:var(--background-fill-secondary);padding:0 var(--size-1);color:var(--body-text-color)}.expand-array.svelte-1kspdo:hover{background:var(--background-fill-primary)}.children.svelte-1kspdo{padding-left:var(--size-4)}.json-item.svelte-1kspdo{display:inline}.null.svelte-1kspdo{color:var(--body-text-color-subdued)}.string.svelte-1kspdo{color:var(--color-green-500)}.number.svelte-1kspdo{color:var(--color-blue-500)}.bool.svelte-1kspdo{color:var(--color-red-500)}.json-holder.svelte-1trjy9a{padding:var(--size-2)}button.svelte-1trjy9a{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);padding:5px;width:22px;height:22px;overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/__init__.py deleted file mode 100644 index 989e92c3458681a6f0be72ae4105ea742750d328..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# A highish-level implementation of the HTTP/1.1 wire protocol (RFC 7230), -# containing no networking code at all, loosely modelled on hyper-h2's generic -# implementation of HTTP/2 (and in particular the h2.connection.H2Connection -# class). There's still a bunch of subtle details you need to get right if you -# want to make this actually useful, because it doesn't implement all the -# semantics to check that what you're asking to write to the wire is sensible, -# but at least it gets you out of dealing with the wire itself. 
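As a rough illustration of the sans-I/O pattern described in the header above (a sketch only, not part of the deleted module; the request and response bytes are invented for the example), an h11 Connection is driven by handing it events to serialize and raw bytes to parse:

    import h11

    conn = h11.Connection(our_role=h11.CLIENT)

    # Serialize a request; the caller writes these bytes to its own transport.
    wire_bytes = conn.send(h11.Request(
        method="GET", target="/",
        headers=[("Host", "example.com"), ("Connection", "close")]))
    wire_bytes += conn.send(h11.EndOfMessage())

    # Feed bytes "read from the network" back in and pull parsed events out.
    conn.receive_data(
        b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nhi")
    while True:
        event = conn.next_event()
        if event is h11.NEED_DATA or isinstance(event, h11.EndOfMessage):
            break
        print(type(event).__name__)  # Response, then Data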
- -from h11._connection import Connection, NEED_DATA, PAUSED -from h11._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from h11._state import ( - CLIENT, - CLOSED, - DONE, - ERROR, - IDLE, - MIGHT_SWITCH_PROTOCOL, - MUST_CLOSE, - SEND_BODY, - SEND_RESPONSE, - SERVER, - SWITCHED_PROTOCOL, -) -from h11._util import LocalProtocolError, ProtocolError, RemoteProtocolError -from h11._version import __version__ - -PRODUCT_ID = "python-h11/" + __version__ - - -__all__ = ( - "Connection", - "NEED_DATA", - "PAUSED", - "ConnectionClosed", - "Data", - "EndOfMessage", - "Event", - "InformationalResponse", - "Request", - "Response", - "CLIENT", - "CLOSED", - "DONE", - "ERROR", - "IDLE", - "MUST_CLOSE", - "SEND_BODY", - "SEND_RESPONSE", - "SERVER", - "SWITCHED_PROTOCOL", - "ProtocolError", - "LocalProtocolError", - "RemoteProtocolError", -) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_backends/anyio.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_backends/anyio.py deleted file mode 100644 index 1ed5228dbde1732de50677e9a3bd6f04a3017433..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_backends/anyio.py +++ /dev/null @@ -1,145 +0,0 @@ -import ssl -import typing - -import anyio - -from .._exceptions import ( - ConnectError, - ConnectTimeout, - ReadError, - ReadTimeout, - WriteError, - WriteTimeout, - map_exceptions, -) -from .._utils import is_socket_readable -from .base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream - - -class AnyIOStream(AsyncNetworkStream): - def __init__(self, stream: anyio.abc.ByteStream) -> None: - self._stream = stream - - async def read( - self, max_bytes: int, timeout: typing.Optional[float] = None - ) -> bytes: - exc_map = { - TimeoutError: ReadTimeout, - anyio.BrokenResourceError: ReadError, - anyio.ClosedResourceError: ReadError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - try: - return await self._stream.receive(max_bytes=max_bytes) - except anyio.EndOfStream: # pragma: nocover - return b"" - - async def write( - self, buffer: bytes, timeout: typing.Optional[float] = None - ) -> None: - if not buffer: - return - - exc_map = { - TimeoutError: WriteTimeout, - anyio.BrokenResourceError: WriteError, - anyio.ClosedResourceError: WriteError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - await self._stream.send(item=buffer) - - async def aclose(self) -> None: - await self._stream.aclose() - - async def start_tls( - self, - ssl_context: ssl.SSLContext, - server_hostname: typing.Optional[str] = None, - timeout: typing.Optional[float] = None, - ) -> AsyncNetworkStream: - exc_map = { - TimeoutError: ConnectTimeout, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - try: - with anyio.fail_after(timeout): - ssl_stream = await anyio.streams.tls.TLSStream.wrap( - self._stream, - ssl_context=ssl_context, - hostname=server_hostname, - standard_compatible=False, - server_side=False, - ) - except Exception as exc: # pragma: nocover - await self.aclose() - raise exc - return AnyIOStream(ssl_stream) - - def get_extra_info(self, info: str) -> typing.Any: - if info == "ssl_object": - return self._stream.extra(anyio.streams.tls.TLSAttribute.ssl_object, None) - if info == "client_addr": - return self._stream.extra(anyio.abc.SocketAttribute.local_address, 
None) - if info == "server_addr": - return self._stream.extra(anyio.abc.SocketAttribute.remote_address, None) - if info == "socket": - return self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None) - if info == "is_readable": - sock = self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None) - return is_socket_readable(sock) - return None - - -class AnyIOBackend(AsyncNetworkBackend): - async def connect_tcp( - self, - host: str, - port: int, - timeout: typing.Optional[float] = None, - local_address: typing.Optional[str] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: - if socket_options is None: - socket_options = [] # pragma: no cover - exc_map = { - TimeoutError: ConnectTimeout, - OSError: ConnectError, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - stream: anyio.abc.ByteStream = await anyio.connect_tcp( - remote_host=host, - remote_port=port, - local_host=local_address, - ) - # By default TCP sockets opened in `asyncio` include TCP_NODELAY. - for option in socket_options: - stream._raw_socket.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return AnyIOStream(stream) - - async def connect_unix_socket( - self, - path: str, - timeout: typing.Optional[float] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: # pragma: nocover - if socket_options is None: - socket_options = [] - exc_map = { - TimeoutError: ConnectTimeout, - OSError: ConnectError, - anyio.BrokenResourceError: ConnectError, - } - with map_exceptions(exc_map): - with anyio.fail_after(timeout): - stream: anyio.abc.ByteStream = await anyio.connect_unix(path) - for option in socket_options: - stream._raw_socket.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return AnyIOStream(stream) - - async def sleep(self, seconds: float) -> None: - await anyio.sleep(seconds) # pragma: nocover diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_ssl.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_ssl.py deleted file mode 100644 index c99c5a67945b8a3a3544d481e979c791ab45fe23..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_ssl.py +++ /dev/null @@ -1,9 +0,0 @@ -import ssl - -import certifi - - -def default_ssl_context() -> ssl.SSLContext: - context = ssl.create_default_context() - context.load_verify_locations(certifi.where()) - return context diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py deleted file mode 100644 index 9d7d771436495dd293ea97841d7330f106fc45cd..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py +++ /dev/null @@ -1,1056 +0,0 @@ -import codecs -import datetime -import functools -from io import BytesIO -import logging -import math -import os -import pathlib -import re -import shutil -import subprocess -from tempfile import TemporaryDirectory -import weakref - -from PIL import Image - -import matplotlib as mpl -from matplotlib import _api, cbook, font_manager as fm -from matplotlib.backend_bases import ( - _Backend, 
FigureCanvasBase, FigureManagerBase, RendererBase -) -from matplotlib.backends.backend_mixed import MixedModeRenderer -from matplotlib.backends.backend_pdf import ( - _create_pdf_info_dict, _datetime_to_pdf) -from matplotlib.path import Path -from matplotlib.figure import Figure -from matplotlib._pylab_helpers import Gcf - -_log = logging.getLogger(__name__) - - -# Note: When formatting floating point values, it is important to use the -# %f/{:f} format rather than %s/{} to avoid triggering scientific notation, -# which is not recognized by TeX. - - -@_api.caching_module_getattr -class __getattr__: - NO_ESCAPE = _api.deprecated("3.6", obj_type="")( - property(lambda self: _NO_ESCAPE)) - re_mathsep = _api.deprecated("3.6", obj_type="")( - property(lambda self: _split_math.__self__)) - - -@_api.deprecated("3.6") -def get_fontspec(): - """Build fontspec preamble from rc.""" - with mpl.rc_context({"pgf.preamble": ""}): - return _get_preamble() - - -@_api.deprecated("3.6") -def get_preamble(): - """Get LaTeX preamble from rc.""" - return mpl.rcParams["pgf.preamble"] - - -def _get_preamble(): - """Prepare a LaTeX preamble based on the rcParams configuration.""" - preamble = [mpl.rcParams["pgf.preamble"]] - if mpl.rcParams["pgf.texsystem"] != "pdflatex": - preamble.append("\\usepackage{fontspec}") - if mpl.rcParams["pgf.rcfonts"]: - families = ["serif", "sans\\-serif", "monospace"] - commands = ["setmainfont", "setsansfont", "setmonofont"] - for family, command in zip(families, commands): - # 1) Forward slashes also work on Windows, so don't mess with - # backslashes. 2) The dirname needs to include a separator. - path = pathlib.Path(fm.findfont(family)) - preamble.append(r"\%s{%s}[Path=\detokenize{%s/}]" % ( - command, path.name, path.parent.as_posix())) - preamble.append(mpl.texmanager._usepackage_if_not_loaded( - "underscore", option="strings")) # Documented as "must come last". - return "\n".join(preamble) - - -# It's better to use only one unit for all coordinates, since the -# arithmetic in latex seems to produce inaccurate conversions. -latex_pt_to_in = 1. / 72.27 -latex_in_to_pt = 1. / latex_pt_to_in -mpl_pt_to_in = 1. / 72. -mpl_in_to_pt = 1. / mpl_pt_to_in - - -_NO_ESCAPE = r"(? 3 else 1.0 - - if has_fill: - _writeln(self.fh, - r"\definecolor{currentfill}{rgb}{%f,%f,%f}" - % tuple(rgbFace[:3])) - _writeln(self.fh, r"\pgfsetfillcolor{currentfill}") - if has_fill and fillopacity != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % fillopacity) - - # linewidth and color - lw = gc.get_linewidth() * mpl_pt_to_in * latex_in_to_pt - stroke_rgba = gc.get_rgb() - _writeln(self.fh, r"\pgfsetlinewidth{%fpt}" % lw) - _writeln(self.fh, - r"\definecolor{currentstroke}{rgb}{%f,%f,%f}" - % stroke_rgba[:3]) - _writeln(self.fh, r"\pgfsetstrokecolor{currentstroke}") - if strokeopacity != 1.0: - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % strokeopacity) - - # line style - dash_offset, dash_list = gc.get_dashes() - if dash_list is None: - _writeln(self.fh, r"\pgfsetdash{}{0pt}") - else: - _writeln(self.fh, - r"\pgfsetdash{%s}{%fpt}" - % ("".join(r"{%fpt}" % dash for dash in dash_list), - dash_offset)) - - def _print_pgf_path(self, gc, path, transform, rgbFace=None): - f = 1. / self.dpi - # check for clip box / ignore clip for filled paths - bbox = gc.get_clip_rectangle() if gc else None - maxcoord = 16383 / 72.27 * self.dpi # Max dimensions in LaTeX. 
- if bbox and (rgbFace is None): - p1, p2 = bbox.get_points() - clip = (max(p1[0], -maxcoord), max(p1[1], -maxcoord), - min(p2[0], maxcoord), min(p2[1], maxcoord)) - else: - clip = (-maxcoord, -maxcoord, maxcoord, maxcoord) - # build path - for points, code in path.iter_segments(transform, clip=clip): - if code == Path.MOVETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathmoveto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CLOSEPOLY: - _writeln(self.fh, r"\pgfpathclose") - elif code == Path.LINETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathlineto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CURVE3: - cx, cy, px, py = tuple(points) - coords = cx * f, cy * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathquadraticcurveto" - r"{\pgfqpoint{%fin}{%fin}}{\pgfqpoint{%fin}{%fin}}" - % coords) - elif code == Path.CURVE4: - c1x, c1y, c2x, c2y, px, py = tuple(points) - coords = c1x * f, c1y * f, c2x * f, c2y * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathcurveto" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - % coords) - - # apply pgf decorators - sketch_params = gc.get_sketch_params() if gc else None - if sketch_params is not None: - # Only "length" directly maps to "segment length" in PGF's API. - # PGF uses "amplitude" to pass the combined deviation in both x- - # and y-direction, while matplotlib only varies the length of the - # wiggle along the line ("randomness" and "length" parameters) - # and has a separate "scale" argument for the amplitude. - # -> Use "randomness" as PRNG seed to allow the user to force the - # same shape on multiple sketched lines - scale, length, randomness = sketch_params - if scale is not None: - # make matplotlib and PGF rendering visually similar - length *= 0.5 - scale *= 2 - # PGF guarantees that repeated loading is a no-op - _writeln(self.fh, r"\usepgfmodule{decorations}") - _writeln(self.fh, r"\usepgflibrary{decorations.pathmorphing}") - _writeln(self.fh, r"\pgfkeys{/pgf/decoration/.cd, " - f"segment length = {(length * f):f}in, " - f"amplitude = {(scale * f):f}in}}") - _writeln(self.fh, f"\\pgfmathsetseed{{{int(randomness)}}}") - _writeln(self.fh, r"\pgfdecoratecurrentpath{random steps}") - - def _pgf_path_draw(self, stroke=True, fill=False): - actions = [] - if stroke: - actions.append("stroke") - if fill: - actions.append("fill") - _writeln(self.fh, r"\pgfusepath{%s}" % ",".join(actions)) - - def option_scale_image(self): - # docstring inherited - return True - - def option_image_nocomposite(self): - # docstring inherited - return not mpl.rcParams['image.composite_image'] - - def draw_image(self, gc, x, y, im, transform=None): - # docstring inherited - - h, w = im.shape[:2] - if w == 0 or h == 0: - return - - if not os.path.exists(getattr(self.fh, "name", "")): - raise ValueError( - "streamed pgf-code does not support raster graphics, consider " - "using the pgf-to-pdf option") - - # save the images to png files - path = pathlib.Path(self.fh.name) - fname_img = "%s-img%d.png" % (path.stem, self.image_counter) - Image.fromarray(im[::-1]).save(path.parent / fname_img) - self.image_counter += 1 - - # reference the image in the pgf picture - _writeln(self.fh, r"\begin{pgfscope}") - self._print_pgf_clip(gc) - f = 1. 
/ self.dpi # from display coords to inch - if transform is None: - _writeln(self.fh, - r"\pgfsys@transformshift{%fin}{%fin}" % (x * f, y * f)) - w, h = w * f, h * f - else: - tr1, tr2, tr3, tr4, tr5, tr6 = transform.frozen().to_values() - _writeln(self.fh, - r"\pgfsys@transformcm{%f}{%f}{%f}{%f}{%fin}{%fin}" % - (tr1 * f, tr2 * f, tr3 * f, tr4 * f, - (tr5 + x) * f, (tr6 + y) * f)) - w = h = 1 # scale is already included in the transform - interp = str(transform is None).lower() # interpolation in PDF reader - _writeln(self.fh, - r"\pgftext[left,bottom]" - r"{%s[interpolate=%s,width=%fin,height=%fin]{%s}}" % - (_get_image_inclusion_command(), - interp, w, h, fname_img)) - _writeln(self.fh, r"\end{pgfscope}") - - def draw_tex(self, gc, x, y, s, prop, angle, *, mtext=None): - # docstring inherited - self.draw_text(gc, x, y, s, prop, angle, ismath="TeX", mtext=mtext) - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - # prepare string for tex - s = _escape_and_apply_props(s, prop) - - _writeln(self.fh, r"\begin{pgfscope}") - - alpha = gc.get_alpha() - if alpha != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % alpha) - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % alpha) - rgb = tuple(gc.get_rgb())[:3] - _writeln(self.fh, r"\definecolor{textcolor}{rgb}{%f,%f,%f}" % rgb) - _writeln(self.fh, r"\pgfsetstrokecolor{textcolor}") - _writeln(self.fh, r"\pgfsetfillcolor{textcolor}") - s = r"\color{textcolor}" + s - - dpi = self.figure.dpi - text_args = [] - if mtext and ( - (angle == 0 or - mtext.get_rotation_mode() == "anchor") and - mtext.get_verticalalignment() != "center_baseline"): - # if text anchoring can be supported, get the original coordinates - # and add alignment information - pos = mtext.get_unitless_position() - x, y = mtext.get_transform().transform(pos) - halign = {"left": "left", "right": "right", "center": ""} - valign = {"top": "top", "bottom": "bottom", - "baseline": "base", "center": ""} - text_args.extend([ - f"x={x/dpi:f}in", - f"y={y/dpi:f}in", - halign[mtext.get_horizontalalignment()], - valign[mtext.get_verticalalignment()], - ]) - else: - # if not, use the text layout provided by Matplotlib. 
- text_args.append(f"x={x/dpi:f}in, y={y/dpi:f}in, left, base") - - if angle != 0: - text_args.append("rotate=%f" % angle) - - _writeln(self.fh, r"\pgftext[%s]{%s}" % (",".join(text_args), s)) - _writeln(self.fh, r"\end{pgfscope}") - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - # get text metrics in units of latex pt, convert to display units - w, h, d = (LatexManager._get_cached_or_new() - .get_width_height_descent(s, prop)) - # TODO: this should be latex_pt_to_in instead of mpl_pt_to_in - # but having a little bit more space around the text looks better, - # plus the bounding box reported by LaTeX is VERY narrow - f = mpl_pt_to_in * self.dpi - return w * f, h * f, d * f - - def flipy(self): - # docstring inherited - return False - - def get_canvas_width_height(self): - # docstring inherited - return (self.figure.get_figwidth() * self.dpi, - self.figure.get_figheight() * self.dpi) - - def points_to_pixels(self, points): - # docstring inherited - return points * mpl_pt_to_in * self.dpi - - -class FigureCanvasPgf(FigureCanvasBase): - filetypes = {"pgf": "LaTeX PGF picture", - "pdf": "LaTeX compiled PGF picture", - "png": "Portable Network Graphics", } - - def get_default_filetype(self): - return 'pdf' - - def _print_pgf_to_fh(self, fh, *, bbox_inches_restore=None): - - header_text = """%% Creator: Matplotlib, PGF backend -%% -%% To include the figure in your LaTeX document, write -%% \\input{.pgf} -%% -%% Make sure the required packages are loaded in your preamble -%% \\usepackage{pgf} -%% -%% Also ensure that all the required font packages are loaded; for instance, -%% the lmodern package is sometimes necessary when using math font. -%% \\usepackage{lmodern} -%% -%% Figures using additional raster images can only be included by \\input if -%% they are in the same directory as the main LaTeX file. For loading figures -%% from other directories you can use the `import` package -%% \\usepackage{import} -%% -%% and then include the figures with -%% \\import{}{.pgf} -%% -""" - - # append the preamble used by the backend as a comment for debugging - header_info_preamble = ["%% Matplotlib used the following preamble"] - for line in _get_preamble().splitlines(): - header_info_preamble.append("%% " + line) - header_info_preamble.append("%%") - header_info_preamble = "\n".join(header_info_preamble) - - # get figure size in inch - w, h = self.figure.get_figwidth(), self.figure.get_figheight() - dpi = self.figure.dpi - - # create pgfpicture environment and write the pgf code - fh.write(header_text) - fh.write(header_info_preamble) - fh.write("\n") - _writeln(fh, r"\begingroup") - _writeln(fh, r"\makeatletter") - _writeln(fh, r"\begin{pgfpicture}") - _writeln(fh, - r"\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{%fin}{%fin}}" - % (w, h)) - _writeln(fh, r"\pgfusepath{use as bounding box, clip}") - renderer = MixedModeRenderer(self.figure, w, h, dpi, - RendererPgf(self.figure, fh), - bbox_inches_restore=bbox_inches_restore) - self.figure.draw(renderer) - - # end the pgfpicture environment - _writeln(fh, r"\end{pgfpicture}") - _writeln(fh, r"\makeatother") - _writeln(fh, r"\endgroup") - - def print_pgf(self, fname_or_fh, **kwargs): - """ - Output pgf macros for drawing the figure so it can be included and - rendered in latex documents. 
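        A minimal usage sketch (illustrative only; it assumes a working LaTeX
        toolchain is available for later compilation of the emitted pgf code):

            import matplotlib
            matplotlib.use("pgf")              # select this backend
            import matplotlib.pyplot as plt

            fig, ax = plt.subplots()
            ax.plot([0, 1, 2], [0, 1, 4])
            fig.savefig("figure.pgf")          # dispatches to print_pgf

        The resulting file can then be pulled into a LaTeX document with
        \input{figure.pgf}, as described in the header comment written by
        _print_pgf_to_fh.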
- """ - with cbook.open_file_cm(fname_or_fh, "w", encoding="utf-8") as file: - if not cbook.file_requires_unicode(file): - file = codecs.getwriter("utf-8")(file) - self._print_pgf_to_fh(file, **kwargs) - - def print_pdf(self, fname_or_fh, *, metadata=None, **kwargs): - """Use LaTeX to compile a pgf generated figure to pdf.""" - w, h = self.figure.get_size_inches() - - info_dict = _create_pdf_info_dict('pgf', metadata or {}) - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in info_dict.items()) - - # print figure to pgf and compile it with latex - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - self.print_pgf(tmppath / "figure.pgf", **kwargs) - (tmppath / "figure.tex").write_text( - "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (w, h), - r"\usepackage{pgf}", - _get_preamble(), - r"\begin{document}", - r"\centering", - r"\input{figure.pgf}", - r"\end{document}", - ]), encoding="utf-8") - texcommand = mpl.rcParams["pgf.texsystem"] - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - "figure.tex"], _log, cwd=tmpdir) - with (tmppath / "figure.pdf").open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def print_png(self, fname_or_fh, **kwargs): - """Use LaTeX to compile a pgf figure to pdf and convert it to png.""" - converter = make_pdf_to_png_converter() - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - pdf_path = tmppath / "figure.pdf" - png_path = tmppath / "figure.png" - self.print_pdf(pdf_path, **kwargs) - converter(pdf_path, png_path, dpi=self.figure.dpi) - with png_path.open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def get_renderer(self): - return RendererPgf(self.figure, None) - - def draw(self): - self.figure.draw_without_rendering() - return super().draw() - - -FigureManagerPgf = FigureManagerBase - - -@_Backend.export -class _BackendPgf(_Backend): - FigureCanvas = FigureCanvasPgf - - -class PdfPages: - """ - A multi-page PDF file using the pgf backend - - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> # Initialize: - >>> with PdfPages('foo.pdf') as pdf: - ... # As many times as you like, create a figure fig and save it: - ... fig = plt.figure() - ... pdf.savefig(fig) - ... # When no figure is specified the current figure is saved - ... pdf.savefig() - """ - __slots__ = ( - '_output_name', - 'keep_empty', - '_n_figures', - '_file', - '_info_dict', - '_metadata', - ) - - def __init__(self, filename, *, keep_empty=True, metadata=None): - """ - Create a new PdfPages object. - - Parameters - ---------- - filename : str or path-like - Plots using `PdfPages.savefig` will be written to a file at this - location. Any older file with the same name is overwritten. - - keep_empty : bool, default: True - If set to False, then empty pdf files will be deleted automatically - when closed. - - metadata : dict, optional - Information dictionary object (see PDF reference section 10.2.1 - 'Document Information Dictionary'), e.g.: - ``{'Creator': 'My software', 'Author': 'Me', 'Title': 'Awesome'}``. - - The standard keys are 'Title', 'Author', 'Subject', 'Keywords', - 'Creator', 'Producer', 'CreationDate', 'ModDate', and - 'Trapped'. 
Values have been predefined for 'Creator', 'Producer' - and 'CreationDate'. They can be removed by setting them to `None`. - - Note that some versions of LaTeX engines may ignore the 'Producer' - key and set it to themselves. - """ - self._output_name = filename - self._n_figures = 0 - self.keep_empty = keep_empty - self._metadata = (metadata or {}).copy() - self._info_dict = _create_pdf_info_dict('pgf', self._metadata) - self._file = BytesIO() - - def _write_header(self, width_inches, height_inches): - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in self._info_dict.items()) - latex_header = "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (width_inches, height_inches), - r"\usepackage{pgf}", - _get_preamble(), - r"\setlength{\parindent}{0pt}", - r"\begin{document}%", - ]) - self._file.write(latex_header.encode('utf-8')) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - - def close(self): - """ - Finalize this object, running LaTeX in a temporary directory - and moving the final pdf file to *filename*. - """ - self._file.write(rb'\end{document}\n') - if self._n_figures > 0: - self._run_latex() - elif self.keep_empty: - open(self._output_name, 'wb').close() - self._file.close() - - def _run_latex(self): - texcommand = mpl.rcParams["pgf.texsystem"] - with TemporaryDirectory() as tmpdir: - tex_source = pathlib.Path(tmpdir, "pdf_pages.tex") - tex_source.write_bytes(self._file.getvalue()) - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - tex_source], - _log, cwd=tmpdir) - shutil.move(tex_source.with_suffix(".pdf"), self._output_name) - - def savefig(self, figure=None, **kwargs): - """ - Save a `.Figure` to this file as a new page. - - Any other keyword arguments are passed to `~.Figure.savefig`. - - Parameters - ---------- - figure : `.Figure` or int, default: the active figure - The figure, or index of the figure, that is saved to the file. - """ - if not isinstance(figure, Figure): - if figure is None: - manager = Gcf.get_active() - else: - manager = Gcf.get_fig_manager(figure) - if manager is None: - raise ValueError("No figure {}".format(figure)) - figure = manager.canvas.figure - - try: - orig_canvas = figure.canvas - figure.canvas = FigureCanvasPgf(figure) - - width, height = figure.get_size_inches() - if self._n_figures == 0: - self._write_header(width, height) - else: - # \pdfpagewidth and \pdfpageheight exist on pdftex, xetex, and - # luatex<0.85; they were renamed to \pagewidth and \pageheight - # on luatex>=0.85. 
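                # (illustrative) for an 8in x 6in figure, the write below emits
                # TeX roughly of the form
                #   \newpage\ifdefined\pdfpagewidth\pdfpagewidth\else\pagewidth\fi=8.0in
                #   \ifdefined\pdfpageheight\pdfpageheight\else\pageheight\fi=6.0in%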
- self._file.write( - br'\newpage' - br'\ifdefined\pdfpagewidth\pdfpagewidth' - br'\else\pagewidth\fi=%ain' - br'\ifdefined\pdfpageheight\pdfpageheight' - br'\else\pageheight\fi=%ain' - b'%%\n' % (width, height) - ) - - figure.savefig(self._file, format="pgf", **kwargs) - self._n_figures += 1 - finally: - figure.canvas = orig_canvas - - def get_pagecount(self): - """Return the current number of pages in the multipage pdf file.""" - return self._n_figures diff --git a/spaces/decluster/airplane_yolov5/README.md b/spaces/decluster/airplane_yolov5/README.md deleted file mode 100644 index 76dba041e9696d0a420fac77523822e5cf4d1bd1..0000000000000000000000000000000000000000 --- a/spaces/decluster/airplane_yolov5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Airplane Yolov5 -emoji: 💻 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/deprem-ml/deprem_satellite_test/download.py b/spaces/deprem-ml/deprem_satellite_test/download.py deleted file mode 100644 index bff2483dc9ef1ff1c94fb1e10520446f312c0ac7..0000000000000000000000000000000000000000 --- a/spaces/deprem-ml/deprem_satellite_test/download.py +++ /dev/null @@ -1,17 +0,0 @@ -def attempt_download_from_hub(repo_id, hf_token=None): - # https://github.com/fcakyon/yolov5-pip/blob/main/yolov5/utils/downloads.py - from huggingface_hub import hf_hub_download, list_repo_files - from huggingface_hub.utils._errors import RepositoryNotFoundError - from huggingface_hub.utils._validators import HFValidationError - try: - repo_files = list_repo_files(repo_id=repo_id, repo_type='model', token=hf_token) - model_file = [f for f in repo_files if f.endswith('.pth')][0] - file = hf_hub_download( - repo_id=repo_id, - filename=model_file, - repo_type='model', - token=hf_token, - ) - return file - except (RepositoryNotFoundError, HFValidationError): - return None diff --git a/spaces/dgongor/WhisperDemo/README.md b/spaces/dgongor/WhisperDemo/README.md deleted file mode 100644 index 60b5edbd9e61f92e547f2b9e716bcb399abe14f3..0000000000000000000000000000000000000000 --- a/spaces/dgongor/WhisperDemo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WhisperDemo -emoji: 🌖 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false -duplicated_from: hwberry2/WhisperDemo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (kastor All Video Downloader Key Crack).md b/spaces/diacanFperku/AutoGPT/HD Online Player (kastor All Video Downloader Key Crack).md deleted file mode 100644 index 4729ff122beec5c1edccb129b5958dded39b68c9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (kastor All Video Downloader Key Crack).md +++ /dev/null @@ -1,6 +0,0 @@ -

HD Online Player (kastor all video downloader key crack)


Download Filehttps://gohhs.com/2uFVLV



-
- d5da3c52bf
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Keil Mdk Arm 460 Crack Fix.md b/spaces/diacanFperku/AutoGPT/Keil Mdk Arm 460 Crack Fix.md deleted file mode 100644 index e69735ae0cbbc1c613f2266706b0e8b13b3f9a2f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Keil Mdk Arm 460 Crack Fix.md +++ /dev/null @@ -1,6 +0,0 @@ -

Keil Mdk Arm 460 Crack


Download File 🆓 https://gohhs.com/2uFT1V



- - d5da3c52bf
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/OMG Oh My God Sequel 3 Full Movie Download In Hd 720p.md b/spaces/diacanFperku/AutoGPT/OMG Oh My God Sequel 3 Full Movie Download In Hd 720p.md deleted file mode 100644 index fa97b10c91665cb0d76fa11708da13bc6a453123..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/OMG Oh My God Sequel 3 Full Movie Download In Hd 720p.md +++ /dev/null @@ -1,11 +0,0 @@ -

OMG Oh My God Sequel 3 Full Movie Download In Hd 720p


Download File >>> https://gohhs.com/2uFVen



-
-This film is a remake of the 2012 Hindi film Oh My God and tells the story of an atheist, Gopal Rao, who is suing God... Directed by: Ramu Nath Gopal. Genre: drama, romance. Year of release: 2017. Actors: Vidya Balan, Ajit Vachani, Suraj Sharma, Kareena Kapoor, Om Puri. Premiere: July 15, 2017. -About the film: "Why doesn't God give me what I want?" -This is a story about a young man named Gopal. -The main character works as a lawyer in one of the courts in Mumbai. -One day, he encounters an inexplicable phenomenon. -Gopal has one dream, and he decides to do everything possible and impossible to achieve it. 8a78ff9644
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup !!BETTER!! Free.md b/spaces/diacanFperku/AutoGPT/Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup !!BETTER!! Free.md deleted file mode 100644 index 9c294791629ba3b07bf5e1cd6527241ff516155c..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup !!BETTER!! Free.md +++ /dev/null @@ -1,11 +0,0 @@ -
-

Free Download Software Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free Call Of Duty 4 World At War. Ctrl Alt Del, download eidf, msie, Upcoming Books, permanent deactivation through the secured connection, challenge redeemer code and activation. You will be directed to the site where the free download software is available for you to download.

-

Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free


DOWNLOAD ===== https://gohhs.com/2uFUS8



-

The audio-visual image is the latest version; you will be directed to the site where you can download the full version of the software. Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free 3, 5, 6, 7 and Latest Free Download. Directly free download the 32-bit, 64-bit and Classic versions using the URL below.

-

Enables the Download Manager to access the entire web to download the latest Free Software. It is also activated by default, but you can select the Live Activation option to use a personalized activation code. Users who prefer a secure connection can activate the software using their activation codes. Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free. This Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free.

-

ubm for pixlr web designer 1.5 virtual id key code new year 2019. Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free allucraft crystal pro 2019 c drive torrent iso 32 bit free download.

-

VLC media player 1.2.6. dll: free download image of pear sketchup 2018 crack full cracked-6.3.. 9ff8810a3977c4147f2377f3c1b5acff8b029397. Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free or you just simply like below pictures.

-

-

Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free pxl file free download for. Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free not working on your machine? Pinnacle.Studio.Media.Suite.v10.1.Multilanguage Setup Free how to do.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/losses.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/README.md b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/README.md deleted file mode 100644 index 14522801290dd26e9ca5366e8112280afa3ebff0..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI甜甜叫花鸡 -emoji: 🌟 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/digitalxingtong/Nailv-Bert-Vits2/modules.py b/spaces/digitalxingtong/Nailv-Bert-Vits2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-Bert-Vits2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - 
super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = 
torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/dolceschokolade/chatbot-mini/types/openai.ts b/spaces/dolceschokolade/chatbot-mini/types/openai.ts deleted file mode 100644 index 21e4c6d577275a387db7e73bfe2ebee27ee66e65..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/openai.ts +++ /dev/null @@ -1,52 +0,0 @@ -import { OPENAI_API_TYPE } from '../utils/app/const'; - -export interface OpenAIModel { - id: string; - name: string; - maxLength: number; // maximum length of a message - tokenLimit: number; -} - -export enum OpenAIModelID { - Open_LLaMA = 'open-llama-7b', - GPT_3_5 = 'gpt-3.5-turbo', - GPT_3_5_AZ = 'gpt-35-turbo', - GPT_4 = 'gpt-4', - GPT_4_32K = 'gpt-4-32k', -} - -// in case the `DEFAULT_MODEL` environment variable is not set or set to an unsupported model -export const fallbackModelID = OpenAIModelID.Open_LLaMA; - -export const OpenAIModels: Record = { - [OpenAIModelID.Open_LLaMA]: { - id: OpenAIModelID.Open_LLaMA, - name: 'open-llama-7b', - maxLength: 12000, - tokenLimit: 2048, - }, - [OpenAIModelID.GPT_3_5]: { - id: OpenAIModelID.GPT_3_5, - name: 'open-llama-7b', - maxLength: 
12000, - tokenLimit: 4000, - }, - [OpenAIModelID.GPT_3_5_AZ]: { - id: OpenAIModelID.GPT_3_5_AZ, - name: 'open-llama-7b', - maxLength: 12000, - tokenLimit: 4000, - }, - [OpenAIModelID.GPT_4]: { - id: OpenAIModelID.GPT_4, - name: 'GPT-4', - maxLength: 24000, - tokenLimit: 8000, - }, - [OpenAIModelID.GPT_4_32K]: { - id: OpenAIModelID.GPT_4_32K, - name: 'GPT-4-32K', - maxLength: 96000, - tokenLimit: 32000, - }, -}; diff --git a/spaces/doluvor/faster-whisper-webui/src/hooks/progressListener.py b/spaces/doluvor/faster-whisper-webui/src/hooks/progressListener.py deleted file mode 100644 index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/src/hooks/progressListener.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import Union - -class ProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - self.total = total - - def on_finished(self): - pass \ No newline at end of file diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/text/symbols.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/text/symbols.py deleted file mode 100644 index 053a7105f7ce95aa51614f6995399fa2172b3eb2..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/text/symbols.py +++ /dev/null @@ -1,76 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' - - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -'''# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' -''' - -'''# sanskrit_cleaners -_pad = '_' -_punctuation = '।' -_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ ' -''' - -'''# cjks_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ ' -''' - -'''# thai_cleaners -_pad = '_' -_punctuation = '.!? 
' -_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์' -''' - -'''# cjke_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ ' -''' - -'''# shanghainese_cleaners -_pad = '_' -_punctuation = ',.!?…' -_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 ' -''' - -'''# chinese_dialect_cleaners -_pad = '_' -_punctuation = ',.!?~…─' -_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚ᴀᴇ↑↓∅ⱼ ' -''' - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/loading.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/loading.tsx deleted file mode 100644 index ac267d287842197c039498ed5fc175abcec3578c..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/loading.tsx +++ /dev/null @@ -1,49 +0,0 @@ -import { motion } from "framer-motion"; -import { FaSadCry } from "react-icons/fa"; -import classNames from "classnames"; - -interface Props { - prompt: string; - error?: string; - className?: string; -} - -export const CollectionLoading: React.FC = ({ - prompt, - error, - className, -}) => { - return ( -
- - {error ? ( - - ) : ( -
- - - -
- )} -

- {error - ? error - : prompt?.length > 180 - ? `${prompt.slice(0, 180)}...` - : prompt} -

-
-
- ); -}; diff --git "a/spaces/erbanku/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/erbanku/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index f1fe20171cc54aec0c79f4961e71b57845f252d5..0000000000000000000000000000000000000000 --- "a/spaces/erbanku/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx 
import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/ezioruan/roop/roop/capturer.py b/spaces/ezioruan/roop/roop/capturer.py deleted file mode 100644 index fd49d468dd4cd45832ab9612205968207a6f45cf..0000000000000000000000000000000000000000 --- a/spaces/ezioruan/roop/roop/capturer.py +++ /dev/null @@ -1,20 +0,0 @@ -from typing import Any -import cv2 - - -def get_video_frame(video_path: str, frame_number: int = 0) -> Any: - capture = cv2.VideoCapture(video_path) - frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT) - capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1)) - has_frame, frame = capture.read() - capture.release() - if has_frame: - return frame - return None - - -def get_video_frame_total(video_path: str) -> int: - capture = cv2.VideoCapture(video_path) - video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - capture.release() - return video_frame_total diff --git a/spaces/f2api/gpt-academic/crazy_functions/__init__.py b/spaces/f2api/gpt-academic/crazy_functions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/facebook/MusicGen/Makefile b/spaces/facebook/MusicGen/Makefile deleted file mode 100644 index 3a4910066583dc22f06f5ec2d5711367c941c86b..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/Makefile +++ /dev/null @@ -1,40 +0,0 @@ -INTEG=AUDIOCRAFT_DORA_DIR="/tmp/magma_$(USER)" python3 -m dora -v run --clear device=cpu dataset.num_workers=0 optim.epochs=1 \ - dataset.train.num_samples=10 dataset.valid.num_samples=10 \ - dataset.evaluate.num_samples=10 dataset.generate.num_samples=2 sample_rate=16000 \ - logging.level=DEBUG -INTEG_COMPRESSION = $(INTEG) solver=compression/debug rvq.n_q=2 rvq.bins=48 checkpoint.save_last=true # SIG is 5091833e -INTEG_MUSICGEN = $(INTEG) solver=musicgen/debug dset=audio/example compression_model_checkpoint=//sig/5091833e \ - transformer_lm.n_q=2 transformer_lm.card=48 transformer_lm.dim=16 checkpoint.save_last=false # Using compression model from 5091833e -INTEG_AUDIOGEN = $(INTEG) solver=audiogen/debug dset=audio/example compression_model_checkpoint=//sig/5091833e \ - transformer_lm.n_q=2 transformer_lm.card=48 transformer_lm.dim=16 checkpoint.save_last=false # Using compression model from 5091833e -INTEG_MBD = $(INTEG) solver=diffusion/debug dset=audio/example \ - checkpoint.save_last=false # Using compression model from 
616d7b3c - -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report - -tests_integ: - $(INTEG_COMPRESSION) - $(INTEG_MBD) - $(INTEG_MUSICGEN) - $(INTEG_AUDIOGEN) - - -api_docs: - pdoc3 --html -o api_docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests api_docs dist diff --git a/spaces/facebook/MusicGen/audiocraft/models/lm.py b/spaces/facebook/MusicGen/audiocraft/models/lm.py deleted file mode 100644 index c4ea2e5e800128c78226aed887fde46930adc817..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/models/lm.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (int, optional): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (int, optional): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. 
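    For example, a sketch that mirrors what LMModel._init_weights does with
    depthwise_init='current' (illustrative only):

        import torch.nn as nn

        layers = nn.ModuleList([nn.Linear(256, 256) for _ in range(4)])
        for depth, layer in enumerate(layers, start=1):
            init_layer(layer, method='gaussian', init_depth=depth,
                       zero_bias_init=True)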
- """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (float, optional): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (str, optional): Method for weight initialization. - depthwise_init (str, optional): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. 
- """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (str, optional): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (str, optional): Depthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initialize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." 
- assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): Indices of the codes to model. - conditions (list of ConditioningAttributes): Conditions to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType], optional): Pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, "Sequence shape must match the specified number of codebooks" - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list of ConditioningAttributes): conditionings to use when modeling - the given codes. 
Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType], optional): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: tp.Optional[bool] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float, optional): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
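        A rough standalone sketch of the temperature + top-k branch
        (illustrative; the actual implementation defers to utils.sample_top_k):

            import torch

            B, K, card = 2, 4, 2048
            logits = torch.randn(B, K, card)
            temp, top_k = 1.0, 250
            probs = torch.softmax(logits / temp, dim=-1)
            topk_probs, topk_idx = probs.topk(top_k, dim=-1)         # [B, K, top_k]
            choice = torch.multinomial(topk_probs.flatten(0, 1), 1)  # [B*K, 1]
            next_token = topk_idx.gather(-1, choice.view(B, K, 1))   # [B, K, 1]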
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple), type(cfg_conditions) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error. - if use_sampling and temp > 0.0: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: tp.Optional[bool] = None, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (torch.Tensor, optional): Prompt tokens of shape [B, K, T]. - conditions_tensors (list of ConditioningAttributes, optional): List of conditions. - num_samples (int, optional): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coeff (float, optional): Classifier-free guidance coefficient. - two_step_cfg (bool, optional): Whether to perform classifier-free guidance with two steps generation. - remove_prompts (bool): Whether to remove prompts from generation or not. - check (bool): Whether to apply further checks on generated sequence. - callback (Callback, optional): Callback function to report generation progress. - Returns: - torch.Tensor: Generated tokens. 
- """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistent. - possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsistent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train and test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should never happen as gen_sequence is filled 
progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef, two_step_cfg=two_step_cfg) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/facebook/StyleNeRF/torch_utils/distributed_utils.py b/spaces/facebook/StyleNeRF/torch_utils/distributed_utils.py deleted file mode 100644 index b9983e4595618e080a15796670c98037cf691c3b..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/torch_utils/distributed_utils.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import logging -import os -import pickle -import random -import socket -import struct -import subprocess -import warnings -import tempfile -import uuid - - -from datetime import date -from pathlib import Path -from collections import OrderedDict -from typing import Any, Dict, Mapping - -import torch -import torch.distributed as dist - - -logger = logging.getLogger(__name__) - - -def is_master(args): - return args.distributed_rank == 0 - - -def init_distributed_mode(rank, args): - if "WORLD_SIZE" in os.environ: - args.world_size = int(os.environ["WORLD_SIZE"]) - - if args.launcher == 'spawn': # single node with multiprocessing.spawn - args.world_size = args.num_gpus - args.rank = rank - args.gpu = rank - - elif 'RANK' in os.environ: - args.rank = int(os.environ["RANK"]) - args.gpu = int(os.environ['LOCAL_RANK']) - - elif 'SLURM_PROCID' in os.environ: - args.rank = int(os.environ['SLURM_PROCID']) - args.gpu = args.rank % torch.cuda.device_count() - - if args.world_size == 1: - return - - if 'MASTER_ADDR' in os.environ: - args.dist_url = 'tcp://{}:{}'.format(os.environ['MASTER_ADDR'], os.environ['MASTER_PORT']) - - print(f'gpu={args.gpu}, rank={args.rank}, world_size={args.world_size}') - args.distributed = True - torch.cuda.set_device(args.gpu) - args.dist_backend = 'nccl' - print('| distributed init (rank {}): {}'.format(args.rank, args.dist_url), flush=True) - - torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - torch.distributed.barrier() - - -def gather_list_and_concat(tensor): - gather_t = [torch.ones_like(tensor) for _ in range(dist.get_world_size())] - dist.all_gather(gather_t, tensor) - return torch.cat(gather_t) - - -def get_rank(): - return dist.get_rank() - - -def get_world_size(): - return dist.get_world_size() - - -def get_default_group(): - return dist.group.WORLD - - -def all_gather_list(data, group=None, max_size=16384): - """Gathers arbitrary data from all nodes into a list. - - Similar to :func:`~torch.distributed.all_gather` but for arbitrary Python - data. Note that *data* must be picklable. 
- - Args: - data (Any): data from the local worker to be gathered on other workers - group (optional): group of the collective - max_size (int, optional): maximum size of the data to be gathered - across workers - """ - rank = get_rank() - world_size = get_world_size() - - buffer_size = max_size * world_size - if not hasattr(all_gather_list, '_buffer') or \ - all_gather_list._buffer.numel() < buffer_size: - all_gather_list._buffer = torch.cuda.ByteTensor(buffer_size) - all_gather_list._cpu_buffer = torch.ByteTensor(max_size).pin_memory() - buffer = all_gather_list._buffer - buffer.zero_() - cpu_buffer = all_gather_list._cpu_buffer - - data = data.cpu() - enc = pickle.dumps(data) - enc_size = len(enc) - header_size = 4 # size of header that contains the length of the encoded data - size = header_size + enc_size - if size > max_size: - raise ValueError('encoded data size ({}) exceeds max_size ({})'.format(size, max_size)) - - header = struct.pack(">I", enc_size) - cpu_buffer[:size] = torch.ByteTensor(list(header + enc)) - start = rank * max_size - buffer[start:start + size].copy_(cpu_buffer[:size]) - - all_reduce(buffer, group=group) - - buffer = buffer.cpu() - try: - result = [] - for i in range(world_size): - out_buffer = buffer[i * max_size:(i + 1) * max_size] - enc_size, = struct.unpack(">I", bytes(out_buffer[:header_size].tolist())) - if enc_size > 0: - result.append(pickle.loads(bytes(out_buffer[header_size:header_size + enc_size].tolist()))) - return result - except pickle.UnpicklingError: - raise Exception( - 'Unable to unpickle data from other workers. all_gather_list requires all ' - 'workers to enter the function together, so this error usually indicates ' - 'that the workers have fallen out of sync somehow. Workers can fall out of ' - 'sync if one of them runs out of memory, or if there are other conditions ' - 'in your training script that can cause one worker to finish an epoch ' - 'while other workers are still iterating over their portions of the data. ' - 'Try rerunning with --ddp-backend=no_c10d and see if that helps.' - ) - - -def all_reduce_dict( - data: Mapping[str, Any], - device, - group=None, -) -> Dict[str, Any]: - """ - AllReduce a dictionary of values across workers. We separately - reduce items that are already on the device and items on CPU for - better performance. - - Args: - data (Mapping[str, Any]): dictionary of data to all-reduce, but - cannot be a nested dictionary - device (torch.device): device for the reduction - group (optional): group of the collective - """ - data_keys = list(data.keys()) - - # We want to separately reduce items that are already on the - # device and items on CPU for performance reasons. 
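-        # Summary of the code below: entries that are not tensors, or whose device type differs
-        # from `device`, are routed through `cpu_data` as float64, while tensors already on the
-        # target device type go to `device_data`. Each group is stacked into a single buffer,
-        # moved to `device`, all-reduced with one collective call, and split back out by key,
-        # so reducing the whole dictionary costs at most two all_reduce calls.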
- cpu_data = OrderedDict() - device_data = OrderedDict() - for k in data_keys: - t = data[k] - if not torch.is_tensor(t): - cpu_data[k] = torch.tensor(t, dtype=torch.double) - elif t.device.type != device.type: - cpu_data[k] = t.to(dtype=torch.double) - else: - device_data[k] = t.to(dtype=torch.double) - - def _all_reduce_dict(data: OrderedDict): - if len(data) == 0: - return data - buf = torch.stack(list(data.values())).to(device=device) - all_reduce(buf, group=group) - return {k: buf[i] for i, k in enumerate(data)} - - cpu_data = _all_reduce_dict(cpu_data) - device_data = _all_reduce_dict(device_data) - - def get_from_stack(key): - if key in cpu_data: - return cpu_data[key] - elif key in device_data: - return device_data[key] - raise KeyError - - return OrderedDict([(key, get_from_stack(key)) for key in data_keys]) - - -def get_shared_folder() -> Path: - user = os.getenv("USER") - if Path("/checkpoint/").is_dir(): - p = Path(f"/checkpoint/{user}/experiments") - p.mkdir(exist_ok=True) - return p - else: - p = Path(f"/tmp/experiments") - p.mkdir(exist_ok=True) - return p - - -def get_init_file(): - # Init file must not exist, but it's parent dir must exist. - os.makedirs(str(get_shared_folder()), exist_ok=True) - init_file = Path(str(get_shared_folder()) + f"/{uuid.uuid4().hex}_init") - if init_file.exists(): - os.remove(str(init_file)) - return init_file - diff --git a/spaces/facebook/StyleNeRF/training/stylenerf.py b/spaces/facebook/StyleNeRF/training/stylenerf.py deleted file mode 100644 index 34ee799b3b30a67f66a7c7a2514c3ee47518aa1a..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/training/stylenerf.py +++ /dev/null @@ -1,2395 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from bdb import set_trace -import copy -from email import generator -import imp -import math -from platform import architecture - - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from torch.autograd import grad -from training.networks import * -from dnnlib.camera import * -from dnnlib.geometry import ( - positional_encoding, upsample, downsample -) -from dnnlib.util import dividable, hash_func, EasyDict -from torch_utils.ops.hash_sample import hash_sample -from torch_utils.ops.grid_sample_gradfix import grid_sample -from torch_utils.ops.nerf_utils import topp_masking -from einops import repeat, rearrange - - -# --------------------------------- basic modules ------------------------------------------- # -@persistence.persistent_class -class Style2Layer(nn.Module): - def __init__(self, - in_channels, - out_channels, - w_dim, - activation='lrelu', - resample_filter=[1,3,3,1], - magnitude_ema_beta = -1, # -1 means not using magnitude ema - **unused_kwargs): - - # simplified version of SynthesisLayer - # no noise, kernel size forced to be 1x1, used in NeRF block - super().__init__() - self.activation = activation - self.conv_clamp = None - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = 0 - self.act_gain = bias_act.activation_funcs[activation].def_gain - self.w_dim = w_dim - self.in_features = in_channels - self.out_features = out_channels - memory_format = torch.contiguous_format - - if w_dim > 0: - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - self.weight = torch.nn.Parameter( - torch.randn([out_channels, in_channels, 1, 1]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - else: - self.weight = 
torch.nn.Parameter(torch.Tensor(out_channels, in_channels)) - self.bias = torch.nn.Parameter(torch.Tensor(out_channels)) - self.weight_gain = 1. - - # initialization - torch.nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5)) - fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(self.weight) - bound = 1 / math.sqrt(fan_in) - torch.nn.init.uniform_(self.bias, -bound, bound) - - self.magnitude_ema_beta = magnitude_ema_beta - if magnitude_ema_beta > 0: - self.register_buffer('w_avg', torch.ones([])) - - def extra_repr(self) -> str: - return 'in_features={}, out_features={}, style={}'.format( - self.in_features, self.out_features, self.w_dim - ) - - def forward(self, x, w=None, fused_modconv=None, gain=1, up=1, **unused_kwargs): - flip_weight = True # (up == 1) # slightly faster HACK - act = self.activation - - if (self.magnitude_ema_beta > 0): - if self.training: # updating EMA. - with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.w_avg.copy_(magnitude_cur.lerp(self.w_avg, self.magnitude_ema_beta)) - input_gain = self.w_avg.rsqrt() - x = x * input_gain - - if fused_modconv is None: - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - fused_modconv = not self.training - - if self.w_dim > 0: # modulated convolution - assert x.ndim == 4, "currently not support modulated MLP" - styles = self.affine(w) # Batch x style_dim - if x.size(0) > styles.size(0): - styles = repeat(styles, 'b c -> (b s) c', s=x.size(0) // styles.size(0)) - - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=None, up=up, - padding=self.padding, resample_filter=self.resample_filter, - flip_weight=flip_weight, fused_modconv=fused_modconv) - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to(x.dtype), act=act, gain=act_gain, clamp=act_clamp) - - else: - if x.ndim == 2: # MLP mode - x = F.relu(F.linear(x, self.weight, self.bias.to(x.dtype))) - else: - x = F.relu(F.conv2d(x, self.weight[:,:,None, None], self.bias)) - # x = bias_act.bias_act(x, self.bias.to(x.dtype), act='relu') - return x - - -@persistence.persistent_class -class SDFDensityLaplace(nn.Module): # alpha * Laplace(loc=0, scale=beta).cdf(-sdf) - def __init__(self, params_init={}, noise_std=0.0, beta_min=0.001, exp_beta=False): - super().__init__() - self.noise_std = noise_std - for p in params_init: - param = nn.Parameter(torch.tensor(params_init[p])) - setattr(self, p, param) - self.beta_min = beta_min - self.exp_beta = exp_beta - if (exp_beta == 'upper') or exp_beta: - self.register_buffer("steps", torch.scalar_tensor(0).float()) - - def density_func(self, sdf, beta=None): - if beta is None: - beta = self.get_beta() - alpha = 1 / beta - return alpha * (0.5 + 0.5 * sdf.sign() * torch.expm1(-sdf.abs() / beta)) # TODO: need abs maybe, not sure - - def get_beta(self): - if self.exp_beta == 'upper': - beta_upper = 0.12 * torch.exp(-0.003 * (self.steps / 1e3)) - beta = min(self.beta.abs(), beta_upper) + self.beta_min - elif self.exp_beta: - if self.steps < 500000: - beta = self.beta.abs() + self.beta_min - else: - beta = self.beta.abs().detach() + self.beta_min - else: - beta = self.beta.abs() + self.beta_min - return beta - - def set_steps(self, steps): - if hasattr(self, "steps"): - self.steps = self.steps * 0 + steps - -# ------------------------------------------------------------------------------------------- # - 
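For reference, the SDFDensityLaplace transform above maps a signed distance s to a density of the form alpha * Laplace(0, beta).cdf(-s), as its own header comment notes. Below is a minimal standalone sketch of that mapping in plain PyTorch; the function name and tensor values are illustrative only and not part of the original module.

import torch

def laplace_sdf_density(sdf, beta, alpha=None):
    # alpha defaults to 1/beta, mirroring density_func above.
    if alpha is None:
        alpha = 1.0 / beta
    # 0.5 + 0.5*sign(s)*(exp(-|s|/beta) - 1) equals the Laplace(0, beta) CDF at -s:
    #   s > 0 (outside the surface): 0.5 * exp(-s/beta), density decays towards 0
    #   s < 0 (inside the surface):  1 - 0.5 * exp(-|s|/beta), density saturates near alpha
    return alpha * (0.5 + 0.5 * sdf.sign() * torch.expm1(-sdf.abs() / beta))

# Quick check against the closed-form Laplace CDF.
sdf = torch.linspace(-1.0, 1.0, 5)
beta = 0.1
cdf_at_neg_sdf = torch.distributions.Laplace(0.0, beta).cdf(-sdf)
assert torch.allclose(laplace_sdf_density(sdf, beta), cdf_at_neg_sdf / beta)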
-@persistence.persistent_class -class NeRFBlock(nn.Module): - ''' - Predicts volume density and color from 3D location, viewing - direction, and latent code z. - ''' - # dimensions - input_dim = 3 - w_dim = 512 # style latent - z_dim = 0 # input latent - rgb_out_dim = 128 - hidden_size = 128 - n_blocks = 8 - img_channels = 3 - magnitude_ema_beta = -1 - disable_latents = False - max_batch_size = 2 ** 18 - shuffle_factor = 1 - implementation = 'batch_reshape' # option: [flatten_2d, batch_reshape] - - # architecture settings - activation = 'lrelu' - use_skip = False - use_viewdirs = False - add_rgb = False - predict_rgb = False - inverse_sphere = False - merge_sigma_feat = False # use one MLP for sigma and features - no_sigma = False # do not predict sigma, only output features - - tcnn_backend = False - use_style = None - use_normal = False - use_sdf = None - volsdf_exp_beta = False - normalized_feat = False - final_sigmoid_act = False - - # positional encoding inpuut - use_pos = False - n_freq_posenc = 10 - n_freq_posenc_views = 4 - downscale_p_by = 1 - gauss_dim_pos = 20 - gauss_dim_view = 4 - gauss_std = 10. - positional_encoding = "normal" - - def __init__(self, nerf_kwargs): - super().__init__() - for key in nerf_kwargs: - if hasattr(self, key): - setattr(self, key, nerf_kwargs[key]) - - self.sdf_mode = self.use_sdf - self.use_sdf = self.use_sdf is not None - if self.use_sdf == 'volsdf': - self.density_transform = SDFDensityLaplace( - params_init={'beta': 0.1}, - beta_min=0.0001, - exp_beta=self.volsdf_exp_beta) - - # ----------- input module ------------------------- - D = self.input_dim if not self.inverse_sphere else self.input_dim + 1 - if self.positional_encoding == 'gauss': - rng = np.random.RandomState(2021) - B_pos = self.gauss_std * torch.from_numpy(rng.randn(D, self.gauss_dim_pos * D)).float() - B_view = self.gauss_std * torch.from_numpy(rng.randn(3, self.gauss_dim_view * 3)).float() - self.register_buffer("B_pos", B_pos) - self.register_buffer("B_view", B_view) - dim_embed = D * self.gauss_dim_pos * 2 - dim_embed_view = 3 * self.gauss_dim_view * 2 - elif self.positional_encoding == 'normal': - dim_embed = D * self.n_freq_posenc * 2 - dim_embed_view = 3 * self.n_freq_posenc_views * 2 - else: # not using positional encoding - dim_embed, dim_embed_view = D, 3 - - if self.use_pos: - dim_embed, dim_embed_view = dim_embed + D, dim_embed_view + 3 - - self.dim_embed = dim_embed - self.dim_embed_view = dim_embed_view - - # ------------ Layers -------------------------- - assert not (self.add_rgb and self.predict_rgb), "only one could be achieved" - assert not ((self.use_viewdirs or self.use_normal) and (self.merge_sigma_feat or self.no_sigma)), \ - "merged MLP does not support." 
- - if self.disable_latents: - w_dim = 0 - elif self.z_dim > 0: # if input global latents, disable using style vectors - w_dim, dim_embed, dim_embed_view = 0, dim_embed + self.z_dim, dim_embed_view + self.z_dim - else: - w_dim = self.w_dim - - final_in_dim = self.hidden_size - if self.use_normal: - final_in_dim += D - - final_out_dim = self.rgb_out_dim * self.shuffle_factor - if self.merge_sigma_feat: - final_out_dim += self.shuffle_factor # predicting sigma - if self.add_rgb: - final_out_dim += self.img_channels - - # start building the model - if self.tcnn_backend: - try: - import tinycudann as tcnn - except ImportError: - raise ImportError("This sample requires the tiny-cuda-nn extension for PyTorch.") - - assert self.merge_sigma_feat and (not self.predict_rgb) and (not self.add_rgb) - assert w_dim == 0, "do not use any modulating inputs" - - tcnn_config = {"otype": "FullyFusedMLP", "activation": "ReLU", "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 1} - self.network = tcnn.Network(dim_embed, final_out_dim, tcnn_config) - self.num_ws = 0 - - else: - self.fc_in = Style2Layer(dim_embed, self.hidden_size, w_dim, activation=self.activation) - self.num_ws = 1 - self.skip_layer = self.n_blocks // 2 - 1 if self.use_skip else None - if self.n_blocks > 1: - self.blocks = nn.ModuleList([ - Style2Layer( - self.hidden_size if i != self.skip_layer else self.hidden_size + dim_embed, - self.hidden_size, - w_dim, activation=self.activation, - magnitude_ema_beta=self.magnitude_ema_beta) - for i in range(self.n_blocks - 1)]) - self.num_ws += (self.n_blocks - 1) - - if not (self.merge_sigma_feat or self.no_sigma): - self.sigma_out = ToRGBLayer(self.hidden_size, self.shuffle_factor, w_dim, kernel_size=1) - self.num_ws += 1 - self.feat_out = ToRGBLayer(final_in_dim, final_out_dim, w_dim, kernel_size=1) - if (self.z_dim == 0 and (not self.disable_latents)): - self.num_ws += 1 - else: - self.num_ws = 0 - - if self.use_viewdirs: - assert self.predict_rgb, "only works when predicting RGB" - self.from_ray = Conv2dLayer(dim_embed_view, final_out_dim, kernel_size=1, activation='linear') - - if self.predict_rgb: # predict RGB over features - self.to_rgb = Conv2dLayer(final_out_dim, self.img_channels * self.shuffle_factor, kernel_size=1, activation='linear') - - def set_steps(self, steps): - if hasattr(self, "steps"): - self.steps.fill_(steps) - - def transform_points(self, p, views=False): - p = p / self.downscale_p_by - if self.positional_encoding == 'gauss': - B = self.B_view if views else self.B_pos - p_transformed = positional_encoding(p, B, 'gauss', self.use_pos) - elif self.positional_encoding == 'normal': - L = self.n_freq_posenc_views if views else self.n_freq_posenc - p_transformed = positional_encoding(p, L, 'normal', self.use_pos) - else: - p_transformed = p - return p_transformed - - def forward(self, p_in, ray_d, z_shape=None, z_app=None, ws=None, shape=None, requires_grad=False, impl=None): - with torch.set_grad_enabled(self.training or self.use_sdf or requires_grad): - impl = 'mlp' if self.tcnn_backend else impl - option, p_in = self.forward_inputs(p_in, shape=shape, impl=impl) - if self.tcnn_backend: - with torch.cuda.amp.autocast(): - p = p_in.squeeze(-1).squeeze(-1) - o = self.network(p) - sigma_raw, feat = o[:, :self.shuffle_factor], o[:, self.shuffle_factor:] - sigma_raw = rearrange(sigma_raw, '(b s) d -> b s d', s=option[2]).to(p_in.dtype) - feat = rearrange(feat, '(b s) d -> b s d', s=option[2]).to(p_in.dtype) - else: - feat, sigma_raw = self.forward_nerf(option, p_in, 
ray_d, ws=ws, z_shape=z_shape, z_app=z_app) - return feat, sigma_raw - - def forward_inputs(self, p_in, shape=None, impl=None): - # prepare the inputs - impl = impl if impl is not None else self.implementation - if (shape is not None) and (impl == 'batch_reshape'): - height, width, n_steps = shape[1:] - elif impl == 'flatten_2d': - (height, width), n_steps = dividable(p_in.shape[1]), 1 - elif impl == 'mlp': - height, width, n_steps = 1, 1, p_in.shape[1] - else: - raise NotImplementedError("looking for more efficient implementation.") - p_in = rearrange(p_in, 'b (h w s) d -> (b s) d h w', h=height, w=width, s=n_steps) - use_normal = self.use_normal or self.use_sdf - if use_normal: - p_in.requires_grad_(True) - return (height, width, n_steps, use_normal), p_in - - def forward_nerf(self, option, p_in, ray_d=None, ws=None, z_shape=None, z_app=None): - height, width, n_steps, use_normal = option - - # forward nerf feature networks - p = self.transform_points(p_in.permute(0,2,3,1)) - if (self.z_dim > 0) and (not self.disable_latents): - assert (z_shape is not None) and (ws is None) - z_shape = repeat(z_shape, 'b c -> (b s) h w c', h=height, w=width, s=n_steps) - p = torch.cat([p, z_shape], -1) - p = p.permute(0,3,1,2) # BS x C x H x W - - if height == width == 1: # MLP - p = p.squeeze(-1).squeeze(-1) - - net = self.fc_in(p, ws[:, 0] if ws is not None else None) - if self.n_blocks > 1: - for idx, layer in enumerate(self.blocks): - ws_i = ws[:, idx + 1] if ws is not None else None - if (self.skip_layer is not None) and (idx == self.skip_layer): - net = torch.cat([net, p], 1) - net = layer(net, ws_i, up=1) - - # forward to get the final results - w_idx = self.n_blocks # fc_in, self.blocks - - feat_inputs = [net] - if not (self.merge_sigma_feat or self.no_sigma): - ws_i = ws[:, w_idx] if ws is not None else None - sigma_out = self.sigma_out(net, ws_i) - if use_normal: - gradients, = grad( - outputs=sigma_out, inputs=p_in, - grad_outputs=torch.ones_like(sigma_out, requires_grad=False), - retain_graph=True, create_graph=True, only_inputs=True) - feat_inputs.append(gradients) - - ws_i = ws[:, -1] if ws is not None else None - net = torch.cat(feat_inputs, 1) if len(feat_inputs) > 1 else net - feat_out = self.feat_out(net, ws_i) # this is used for lowres output - - if self.merge_sigma_feat: # split sigma from the feature - sigma_out, feat_out = feat_out[:, :self.shuffle_factor], feat_out[:, self.shuffle_factor:] - elif self.no_sigma: - sigma_out = None - - if self.predict_rgb: - if self.use_viewdirs and ray_d is not None: - ray_d = ray_d / torch.norm(ray_d, dim=-1, keepdim=True) - ray_d = self.transform_points(ray_d, views=True) - if self.z_dim > 0: - ray_d = torch.cat([ray_d, repeat(z_app, 'b c -> b (h w s) c', h=height, w=width, s=n_steps)], -1) - ray_d = rearrange(ray_d, 'b (h w s) d -> (b s) d h w', h=height, w=width, s=n_steps) - feat_ray = self.from_ray(ray_d) - rgb = self.to_rgb(F.leaky_relu(feat_out + feat_ray)) - else: - rgb = self.to_rgb(feat_out) - - if self.final_sigmoid_act: - rgb = torch.sigmoid(rgb) - if self.normalized_feat: - feat_out = feat_out / (1e-7 + feat_out.norm(dim=-1, keepdim=True)) - feat_out = torch.cat([rgb, feat_out], 1) - - # transform back - if feat_out.ndim == 2: # mlp mode - sigma_out = rearrange(sigma_out, '(b s) d -> b s d', s=n_steps) if sigma_out is not None else None - feat_out = rearrange(feat_out, '(b s) d -> b s d', s=n_steps) - else: - sigma_out = rearrange(sigma_out, '(b s) d h w -> b (h w s) d', s=n_steps) if sigma_out is not None else None - feat_out = 
rearrange(feat_out, '(b s) d h w -> b (h w s) d', s=n_steps) - return feat_out, sigma_out - - -@persistence.persistent_class -class CameraGenerator(torch.nn.Module): - def __init__(self, in_dim=2, hi_dim=128, out_dim=2): - super().__init__() - self.affine1 = FullyConnectedLayer(in_dim, hi_dim, activation='lrelu') - self.affine2 = FullyConnectedLayer(hi_dim, hi_dim, activation='lrelu') - self.proj = FullyConnectedLayer(hi_dim, out_dim) - - def forward(self, x): - cam = self.proj(self.affine2(self.affine1(x))) - return cam - - -@persistence.persistent_class -class CameraRay(object): - - range_u = (0, 0) - range_v = (0.25, 0.25) - range_radius = (2.732, 2.732) - depth_range = [0.5, 6.] - gaussian_camera = False - angular_camera = False - intersect_ball = False - fov = 49.13 - bg_start = 1.0 - depth_transform = None # "LogWarp" or "InverseWarp" - dists_normalized = False # use normalized interval instead of real dists - random_rotate = False - ray_align_corner = True - - nonparam_cameras = None - - def __init__(self, camera_kwargs, **other_kwargs): - if len(camera_kwargs) == 0: # for compitatbility of old checkpoints - camera_kwargs.update(other_kwargs) - for key in camera_kwargs: - if hasattr(self, key): - setattr(self, key, camera_kwargs[key]) - self.camera_matrix = get_camera_mat(fov=self.fov) - - def prepare_pixels(self, img_res, tgt_res, vol_res, camera_matrices, theta, margin=0, **unused): - if self.ray_align_corner: - all_pixels = self.get_pixel_coords(img_res, camera_matrices, theta=theta) - all_pixels = rearrange(all_pixels, 'b (h w) c -> b c h w', h=img_res, w=img_res) - tgt_pixels = F.interpolate(all_pixels, size=(tgt_res, tgt_res), mode='nearest') if tgt_res < img_res else all_pixels.clone() - vol_pixels = F.interpolate(tgt_pixels, size=(vol_res, vol_res), mode='nearest') if tgt_res > vol_res else tgt_pixels.clone() - vol_pixels = rearrange(vol_pixels, 'b c h w -> b (h w) c') - - else: # coordinates not aligned! 
- tgt_pixels = self.get_pixel_coords(tgt_res, camera_matrices, corner_aligned=False, theta=theta) - vol_pixels = self.get_pixel_coords(vol_res, camera_matrices, corner_aligned=False, theta=theta, margin=margin) \ - if (tgt_res > vol_res) or (margin > 0) else tgt_pixels.clone() - tgt_pixels = rearrange(tgt_pixels, 'b (h w) c -> b c h w', h=tgt_res, w=tgt_res) - return vol_pixels, tgt_pixels - - def prepare_pixels_regularization(self, tgt_pixels, n_reg_samples): - # only apply when size is bigger than voxel resolution - pace = tgt_pixels.size(-1) // n_reg_samples - idxs = torch.arange(0, tgt_pixels.size(-1), pace, device=tgt_pixels.device) # n_reg_samples - u_xy = torch.rand(tgt_pixels.size(0), 2, device=tgt_pixels.device) - u_xy = (u_xy * pace).floor().long() # batch_size x 2 - x_idxs, y_idxs = idxs[None,:] + u_xy[:,:1], idxs[None,:] + u_xy[:,1:] - rand_indexs = (x_idxs[:,None,:] + y_idxs[:,:,None] * tgt_pixels.size(-1)).reshape(tgt_pixels.size(0), -1) - tgt_pixels = rearrange(tgt_pixels, 'b c h w -> b (h w) c') - rand_pixels = tgt_pixels.gather(1, rand_indexs.unsqueeze(-1).repeat(1,1,2)) - return rand_pixels, rand_indexs - - def get_roll(self, ws, training=True, theta=None, **unused): - if (self.random_rotate is not None) and training: - theta = torch.randn(ws.size(0)).to(ws.device) * self.random_rotate / 2 - theta = theta / 180 * math.pi - else: - if theta is not None: - theta = torch.ones(ws.size(0)).to(ws.device) * theta - return theta - - def get_camera(self, batch_size, device, mode='random', fov=None, force_uniform=False): - if fov is not None: - camera_matrix = get_camera_mat(fov) - else: - camera_matrix = self.camera_matrix - camera_mat = camera_matrix.repeat(batch_size, 1, 1).to(device) - reg_loss = None # TODO: useless - - if isinstance(mode, list): - # default camera generator, we assume input mode is linear - if len(mode) == 3: - val_u, val_v, val_r = mode - r0 = self.range_radius[0] - r1 = self.range_radius[1] - else: - val_u, val_v, val_r, r_s = mode - r0 = self.range_radius[0] * r_s - r1 = self.range_radius[1] * r_s - - world_mat = get_camera_pose( - self.range_u, self.range_v, [r0, r1], - val_u, val_v, val_r, - batch_size=batch_size, - gaussian=False, # input mode is by default uniform - angular=self.angular_camera).to(device) - - elif isinstance(mode, torch.Tensor): - world_mat, mode = get_camera_pose_v2( - self.range_u, self.range_v, self.range_radius, mode, - gaussian=self.gaussian_camera and (not force_uniform), - angular=self.angular_camera) - world_mat = world_mat.to(device) - mode = torch.stack(mode, 1).to(device) - - else: - world_mat, mode = get_random_pose( - self.range_u, self.range_v, - self.range_radius, batch_size, - gaussian=self.gaussian_camera, - angular=self.angular_camera) - world_mat = world_mat.to(device) - mode = torch.stack(mode, 1).to(device) - return camera_mat.float(), world_mat.float(), mode, reg_loss - - def get_transformed_depth(self, di, reversed=False): - depth_range = self.depth_range - - if (self.depth_transform is None) or (self.depth_transform == 'None'): - g_fwd, g_inv = lambda x: x, lambda x: x - elif self.depth_transform == 'LogWarp': - g_fwd, g_inv = math.log, torch.exp - elif self.depth_transform == 'InverseWarp': - g_fwd, g_inv = lambda x: 1/x, lambda x: 1/x - else: - raise NotImplementedError - - if not reversed: - return g_inv(g_fwd(depth_range[1]) * di + g_fwd(depth_range[0]) * (1 - di)) - else: - d0 = (g_fwd(di) - g_fwd(depth_range[0])) / (g_fwd(depth_range[1]) - g_fwd(depth_range[0])) - return d0.clip(min=0, max=1) - - def 
get_evaluation_points(self, pixels_world=None, camera_world=None, di=None, p_i=None, no_reshape=False, transform=None): - if p_i is None: - batch_size = pixels_world.shape[0] - n_steps = di.shape[-1] - ray_i = pixels_world - camera_world - p_i = camera_world.unsqueeze(-2).contiguous() + \ - di.unsqueeze(-1).contiguous() * ray_i.unsqueeze(-2).contiguous() - ray_i = ray_i.unsqueeze(-2).repeat(1, 1, n_steps, 1) - - else: - assert no_reshape, "only used to transform points to a warped space" - - if transform is None: - transform = self.depth_transform - - if transform == 'LogWarp': - c = torch.tensor([1., 0., 0.]).to(p_i.device) - p_i = normalization_inverse_sqrt_dist_centered( - p_i, c[None, None, None, :], self.depth_range[1]) - - elif transform == 'InverseWarp': - # https://arxiv.org/pdf/2111.12077.pdf - p_n = p_i.norm(p=2, dim=-1, keepdim=True).clamp(min=1e-7) - con = p_n.ge(1).type_as(p_n) - p_i = p_i * (1 -con) + (2 - 1 / p_n) * (p_i / p_n) * con - - if no_reshape: - return p_i - - assert(p_i.shape == ray_i.shape) - p_i = p_i.reshape(batch_size, -1, 3) - ray_i = ray_i.reshape(batch_size, -1, 3) - return p_i, ray_i - - def get_evaluation_points_bg(self, pixels_world, camera_world, di): - batch_size = pixels_world.shape[0] - n_steps = di.shape[-1] - n_pixels = pixels_world.shape[1] - ray_world = pixels_world - camera_world - ray_world = ray_world / ray_world.norm(dim=-1, keepdim=True) # normalize - - camera_world = camera_world.unsqueeze(-2).expand(batch_size, n_pixels, n_steps, 3) - ray_world = ray_world.unsqueeze(-2).expand(batch_size, n_pixels, n_steps, 3) - bg_pts, _ = depth2pts_outside(camera_world, ray_world, di) # di: 1 ---> 0 - - bg_pts = bg_pts.reshape(batch_size, -1, 4) - ray_world = ray_world.reshape(batch_size, -1, 3) - return bg_pts, ray_world - - def add_noise_to_interval(self, di): - di_mid = .5 * (di[..., 1:] + di[..., :-1]) - di_high = torch.cat([di_mid, di[..., -1:]], dim=-1) - di_low = torch.cat([di[..., :1], di_mid], dim=-1) - noise = torch.rand_like(di_low) - ti = di_low + (di_high - di_low) * noise - return ti - - def calc_volume_weights(self, sigma, z_vals=None, ray_vector=None, dists=None, last_dist=1e10): - if dists is None: - dists = z_vals[..., 1:] - z_vals[..., :-1] - if ray_vector is not None: - dists = dists * torch.norm(ray_vector, dim=-1, keepdim=True) - dists = torch.cat([dists, torch.ones_like(dists[..., :1]) * last_dist], dim=-1) - alpha = 1.-torch.exp(-F.relu(sigma)*dists) - - if last_dist > 0: - alpha[..., -1] = 1 - - # alpha = 1.-torch.exp(-sigma * dists) - T = torch.cumprod(torch.cat([ - torch.ones_like(alpha[:, :, :1]), - (1. 
- alpha + 1e-10), ], dim=-1), dim=-1)[..., :-1] - weights = alpha * T - return weights, T[..., -1], dists - - def get_pixel_coords(self, tgt_res, camera_matrices, corner_aligned=True, margin=0, theta=None, invert_y=True): - device = camera_matrices[0].device - batch_size = camera_matrices[0].shape[0] - # margin = self.margin if margin is None else margin - full_pixels = arange_pixels((tgt_res, tgt_res), - batch_size, invert_y_axis=invert_y, margin=margin, - corner_aligned=corner_aligned).to(device) - if (theta is not None): - theta = theta.unsqueeze(-1) - x = full_pixels[..., 0] * torch.cos(theta) - full_pixels[..., 1] * torch.sin(theta) - y = full_pixels[..., 0] * torch.sin(theta) + full_pixels[..., 1] * torch.cos(theta) - full_pixels = torch.stack([x, y], -1) - return full_pixels - - def get_origin_direction(self, pixels, camera_matrices): - camera_mat, world_mat = camera_matrices[:2] - if camera_mat.size(0) < pixels.size(0): - camera_mat = repeat(camera_mat, 'b c d -> (b s) c d', s=pixels.size(0)//camera_mat.size(0)) - if world_mat.size(0) < pixels.size(0): - world_mat = repeat(world_mat, 'b c d -> (b s) c d', s=pixels.size(0)//world_mat.size(0)) - pixels_world = image_points_to_world(pixels, camera_mat=camera_mat, world_mat=world_mat) - camera_world = origin_to_world(pixels.size(1), camera_mat=camera_mat, world_mat=world_mat) - ray_vector = pixels_world - camera_world - return pixels_world, camera_world, ray_vector - - def set_camera_prior(self, dataset_cams): - self.nonparam_cameras = dataset_cams - - -@persistence.persistent_class -class VolumeRenderer(object): - - n_ray_samples = 14 - n_bg_samples = 4 - n_final_samples = None # final nerf steps after upsampling (optional) - sigma_type = 'relu' # other allowed options including, "abs", "shiftedsoftplus", "exp" - - hierarchical = True - fine_only = False - no_background = False - white_background = False - mask_background = False - pre_volume_size = None - - bound = None - density_p_target = 1.0 - tv_loss_weight = 0.0 # for now only works for density-based voxels - - def __init__(self, renderer_kwargs, camera_ray, input_encoding=None, **other_kwargs): - if len(renderer_kwargs) == 0: # for compitatbility of old checkpoints - renderer_kwargs.update(other_kwargs) - for key in renderer_kwargs: - if hasattr(self, key): - setattr(self, key, renderer_kwargs[key]) - self.C = camera_ray - self.I = input_encoding - - def split_feat(self, x, img_channels, white_color=None, split_rgb=True): - img = x[:, :img_channels] - if split_rgb: - x = x[:, img_channels:] - if (white_color is not None) and self.white_background: - img = img + white_color - return x, img - - def get_bound(self): - if self.bound is not None: - return self.bound - - # when applying normalization, the points are restricted inside R=2 ball - if self.C.depth_transform == 'InverseWarp': - bound = 2 - else: # TODO: this is a bit hacky as we assume object at origin - bound = (self.C.depth_range[1] - self.C.depth_range[0]) - return bound - - def get_density(self, sigma_raw, fg_nerf, no_noise=False, training=False): - if fg_nerf.use_sdf: - sigma = fg_nerf.density_transform.density_func(sigma_raw) - elif self.sigma_type == 'relu': - if training and (not no_noise): # adding noise to pass gradient? - sigma_raw = sigma_raw + torch.randn_like(sigma_raw) - sigma = F.relu(sigma_raw) - elif self.sigma_type == 'shiftedsoftplus': # https://arxiv.org/pdf/2111.11215.pdf - sigma = F.softplus(sigma_raw - 1) # 1 is the shifted bias. 
- elif self.sigma_type == 'exp_truncated': # density in the log-space - sigma = torch.exp(5 - F.relu(5 - (sigma_raw - 1))) # up-bound = 5, also shifted by 1 - else: - sigma = sigma_raw - return sigma - - def forward_hierarchical_sampling(self, di, weights, n_steps, det=False): - di_mid = 0.5 * (di[..., :-1] + di[..., 1:]) - n_bins = di_mid.size(-1) - batch_size = di.size(0) - di_fine = sample_pdf( - di_mid.reshape(-1, n_bins), - weights.reshape(-1, n_bins+1)[:, 1:-1], - n_steps, det=det).reshape(batch_size, -1, n_steps) - return di_fine - - def forward_rendering_with_pre_density(self, H, output, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles): - pixels_world, camera_world, ray_vector = nerf_input_cams - z_shape_obj, z_app_obj = latent_codes[:2] - height, width = dividable(H.n_points) - fg_shape = [H.batch_size, height, width, H.n_steps] - bound = self.get_bound() - - # sample points - di = torch.linspace(0., 1., steps=H.n_steps).to(H.device) - di = repeat(di, 's -> b n s', b=H.batch_size, n=H.n_points) - if (H.training and (not H.get('disable_noise', False))) or H.get('force_noise', False): - di = self.C.add_noise_to_interval(di) - di_trs = self.C.get_transformed_depth(di) - p_i, r_i = self.C.get_evaluation_points(pixels_world, camera_world, di_trs) - p_i = self.I.query_input_features(p_i, nerf_input_feats, fg_shape, bound) - - pre_sigma_raw, p_i = p_i[...,:self.I.sigma_dim].sum(dim=-1, keepdim=True), p_i[..., self.I.sigma_dim:] - pre_sigma = self.get_density(rearrange(pre_sigma_raw, 'b (n s) () -> b n s', s=H.n_steps), - fg_nerf, training=H.training) - - pre_weights = self.C.calc_volume_weights( - pre_sigma, di if self.C.dists_normalized else di_trs, ray_vector, last_dist=1e10)[0] - - feat, _ = fg_nerf(p_i, r_i, z_shape_obj, z_app_obj, ws=styles, shape=fg_shape) - feat = rearrange(feat, 'b (n s) d -> b n s d', s=H.n_steps) - feat = torch.sum(pre_weights.unsqueeze(-1) * feat, dim=-2) - - output.feat += [feat] - output.fg_weights = pre_weights - output.fg_depths = (di, di_trs) - return output - - def forward_sampling(self, H, output, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles): - # TODO: experimental research code. Not functional yet. - - pixels_world, camera_world, ray_vector = nerf_input_cams - z_shape_obj, z_app_obj = latent_codes[:2] - height, width = dividable(H.n_points) - bound = self.get_bound() - - # just to simulate - H.n_steps = 64 - di = torch.linspace(0., 1., steps=H.n_steps).to(H.device) - di = repeat(di, 's -> b n s', b=H.batch_size, n=H.n_points) - if (H.training and (not H.get('disable_noise', False))) or H.get('force_noise', False): - di = self.C.add_noise_to_interval(di) - di_trs = self.C.get_transformed_depth(di) - - fg_shape = [H.batch_size, height, width, 1] - - # iteration in the loop (?) 
- feats, sigmas = [], [] - with torch.enable_grad(): - di_trs.requires_grad_(True) - for s in range(di_trs.shape[-1]): - di_s = di_trs[..., s:s+1] - p_i, r_i = self.C.get_evaluation_points(pixels_world, camera_world, di_s) - if nerf_input_feats is not None: - p_i = self.I.query_input_features(p_i, nerf_input_feats, fg_shape, bound) - feat, sigma_raw = fg_nerf(p_i, r_i, z_shape_obj, z_app_obj, ws=styles, shape=fg_shape, requires_grad=True) - sigma = self.get_density(sigma_raw, fg_nerf, training=H.training) - feats += [feat] - sigmas += [sigma] - feat, sigma = torch.stack(feats, 2), torch.cat(sigmas, 2) - fg_weights, bg_lambda = self.C.calc_volume_weights( - sigma, di if self.C.dists_normalized else di_trs, # use real dists for computing weights - ray_vector, last_dist=0 if not H.fg_inf_depth else 1e10)[:2] - fg_feat = torch.sum(fg_weights.unsqueeze(-1) * feat, dim=-2) - - output.feat += [fg_feat] - output.full_out += [feat] - output.fg_weights = fg_weights - output.bg_lambda = bg_lambda - output.fg_depths = (di, di_trs) - return output - - def forward_rendering(self, H, output, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles): - pixels_world, camera_world, ray_vector = nerf_input_cams - z_shape_obj, z_app_obj = latent_codes[:2] - height, width = dividable(H.n_points) - fg_shape = [H.batch_size, height, width, H.n_steps] - bound = self.get_bound() - - # sample points - di = torch.linspace(0., 1., steps=H.n_steps).to(H.device) - di = repeat(di, 's -> b n s', b=H.batch_size, n=H.n_points) - if (H.training and (not H.get('disable_noise', False))) or H.get('force_noise', False): - di = self.C.add_noise_to_interval(di) - di_trs = self.C.get_transformed_depth(di) - p_i, r_i = self.C.get_evaluation_points(pixels_world, camera_world, di_trs) - - if nerf_input_feats is not None: - p_i = self.I.query_input_features(p_i, nerf_input_feats, fg_shape, bound) - - feat, sigma_raw = fg_nerf(p_i, r_i, z_shape_obj, z_app_obj, ws=styles, shape=fg_shape) - feat = rearrange(feat, 'b (n s) d -> b n s d', s=H.n_steps) - sigma_raw = rearrange(sigma_raw.squeeze(-1), 'b (n s) -> b n s', s=H.n_steps) - sigma = self.get_density(sigma_raw, fg_nerf, training=H.training) - fg_weights, bg_lambda = self.C.calc_volume_weights( - sigma, di if self.C.dists_normalized else di_trs, # use real dists for computing weights - ray_vector, last_dist=0 if not H.fg_inf_depth else 1e10)[:2] - - if self.hierarchical and (not H.get('disable_hierarchical', False)): - with torch.no_grad(): - di_fine = self.forward_hierarchical_sampling(di, fg_weights, H.n_steps, det=(not H.training)) - di_trs_fine = self.C.get_transformed_depth(di_fine) - p_f, r_f = self.C.get_evaluation_points(pixels_world, camera_world, di_trs_fine) - if nerf_input_feats is not None: - p_f = self.I.query_input_features(p_f, nerf_input_feats, fg_shape, bound) - - feat_f, sigma_raw_f = fg_nerf(p_f, r_f, z_shape_obj, z_app_obj, ws=styles, shape=fg_shape) - feat_f = rearrange(feat_f, 'b (n s) d -> b n s d', s=H.n_steps) - sigma_raw_f = rearrange(sigma_raw_f.squeeze(-1), 'b (n s) -> b n s', s=H.n_steps) - sigma_f = self.get_density(sigma_raw_f, fg_nerf, training=H.training) - - feat = torch.cat([feat_f, feat], 2) - sigma = torch.cat([sigma_f, sigma], 2) - sigma_raw = torch.cat([sigma_raw_f, sigma_raw], 2) - di = torch.cat([di_fine, di], 2) - di_trs = torch.cat([di_trs_fine, di_trs], 2) - - di, indices = torch.sort(di, dim=2) - di_trs = torch.gather(di_trs, 2, indices) - sigma = torch.gather(sigma, 2, indices) - sigma_raw = torch.gather(sigma_raw, 2, indices) - 
feat = torch.gather(feat, 2, repeat(indices, 'b n s -> b n s d', d=feat.size(-1))) - - fg_weights, bg_lambda = self.C.calc_volume_weights( - sigma, di if self.C.dists_normalized else di_trs, # use real dists for computing weights, - ray_vector, last_dist=0 if not H.fg_inf_depth else 1e10)[:2] - - fg_feat = torch.sum(fg_weights.unsqueeze(-1) * feat, dim=-2) - - output.feat += [fg_feat] - output.full_out += [feat] - output.fg_weights = fg_weights - output.bg_lambda = bg_lambda - output.fg_depths = (di, di_trs) - return output - - def forward_rendering_background(self, H, output, bg_nerf, nerf_input_cams, latent_codes, styles_bg): - pixels_world, camera_world, _ = nerf_input_cams - z_shape_bg, z_app_bg = latent_codes[2:] - height, width = dividable(H.n_points) - bg_shape = [H.batch_size, height, width, H.n_bg_steps] - if H.fixed_input_cams is not None: - pixels_world, camera_world, _ = H.fixed_input_cams - - # render background, use NeRF++ inverse sphere parameterization - di = torch.linspace(-1., 0., steps=H.n_bg_steps).to(H.device) - di = repeat(di, 's -> b n s', b=H.batch_size, n=H.n_points) * self.C.bg_start - if (H.training and (not H.get('disable_noise', False))) or H.get('force_noise', False): - di = self.C.add_noise_to_interval(di) - p_bg, r_bg = self.C.get_evaluation_points_bg(pixels_world, camera_world, -di) - - feat, sigma_raw = bg_nerf(p_bg, r_bg, z_shape_bg, z_app_bg, ws=styles_bg, shape=bg_shape) - feat = rearrange(feat, 'b (n s) d -> b n s d', s=H.n_bg_steps) - sigma_raw = rearrange(sigma_raw.squeeze(-1), 'b (n s) -> b n s', s=H.n_bg_steps) - sigma = self.get_density(sigma_raw, bg_nerf, training=H.training) - bg_weights = self.C.calc_volume_weights(sigma, di, None)[0] - bg_feat = torch.sum(bg_weights.unsqueeze(-1) * feat, dim=-2) - - if output.get('bg_lambda', None) is not None: - bg_feat = output.bg_lambda.unsqueeze(-1) * bg_feat - output.feat += [bg_feat] - output.full_out += [feat] - output.bg_weights = bg_weights - output.bg_depths = di - return output - - def forward_volume_rendering( - self, - nerf_modules, # (fg_nerf, bg_nerf) - camera_matrices, # camera (K, RT) - vol_pixels, - - nerf_input_feats = None, - latent_codes = None, - styles = None, - styles_bg = None, - not_render_background = False, - only_render_background = False, - - render_option = None, - return_full = False, - - alpha = 0, - **unused): - - assert (latent_codes is not None) or (styles is not None) - assert self.no_background or (nerf_input_feats is None), "input features do not support background field" - - # hyper-parameters for rendering - H = EasyDict(**unused) - output = EasyDict() - output.reg_loss = EasyDict() - output.feat = [] - output.full_out = [] - - if render_option is None: - render_option = "" - H.render_option = render_option - H.alpha = alpha - - # prepare for rendering (parameters) - fg_nerf, bg_nerf = nerf_modules - - H.training = fg_nerf.training - H.device = camera_matrices[0].device - H.batch_size = camera_matrices[0].shape[0] - H.img_channels = fg_nerf.img_channels - H.n_steps = self.n_ray_samples - H.n_bg_steps = self.n_bg_samples - if alpha == -1: - H.n_steps = 20 # just for memory safe. 
- if "steps" in render_option: - H.n_steps = [int(r.split(':')[1]) for r in H.render_option.split(',') if r[:5] == 'steps'][0] - - # prepare for pixels for generating images - if isinstance(vol_pixels, tuple): - vol_pixels, rand_pixels = vol_pixels - pixels = torch.cat([vol_pixels, rand_pixels], 1) - H.rnd_res = int(math.sqrt(rand_pixels.size(1))) - else: - pixels, rand_pixels, H.rnd_res = vol_pixels, None, None - H.tgt_res, H.n_points = int(math.sqrt(vol_pixels.size(1))), pixels.size(1) - nerf_input_cams = self.C.get_origin_direction(pixels, camera_matrices) - - # set up an frozen camera for background if necessary - if ('freeze_bg' in H.render_option) and (bg_nerf is not None): - pitch, yaw = 0.2 + np.pi/2, 0 - range_u, range_v = self.C.range_u, self.C.range_v - u = (yaw - range_u[0]) / (range_u[1] - range_u[0]) - v = (pitch - range_v[0]) / (range_v[1] - range_v[0]) - fixed_camera = self.C.get_camera( - batch_size=H.batch_size, mode=[u, v, 0.5], device=H.device) - H.fixed_input_cams = self.C.get_origin_direction(pixels, fixed_camera) - else: - H.fixed_input_cams = None - - H.fg_inf_depth = (self.no_background or not_render_background) and (not self.white_background) - assert(not (not_render_background and only_render_background)) - - # volume rendering options: bg_weights, bg_lambda = None, None - if (nerf_input_feats is not None) and \ - len(nerf_input_feats) == 4 and \ - nerf_input_feats[2] == 'tri_vector' and \ - self.I.sigma_dim > 0 and H.fg_inf_depth: - # volume rendering with pre-computed density similar to tensor-decomposition - output = self.forward_rendering_with_pre_density( - H, output, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles) - - else: - # standard volume rendering - if not only_render_background: - output = self.forward_rendering( - H, output, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles) - - # background rendering (NeRF++) - if (not not_render_background) and (not self.no_background): - output = self.forward_rendering_background( - H, output, bg_nerf, nerf_input_cams, latent_codes, styles_bg) - - if ('early' in render_option) and ('value' not in render_option): - return self.gen_optional_output( - H, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles, output) - - # ------------------------------------------- PREPARE FULL OUTPUT (NO 2D aggregation) -------------------------------------------- # - vol_len = vol_pixels.size(1) - feat_map = sum(output.feat) - full_x = rearrange(feat_map[:, :vol_len], 'b (h w) d -> b d h w', h=H.tgt_res) - split_rgb = fg_nerf.add_rgb or fg_nerf.predict_rgb - - full_out = self.split_feat(full_x, H.img_channels, None, split_rgb=split_rgb) - if rand_pixels is not None: # used in full supervision (debug later) - if return_full: - assert (fg_nerf.predict_rgb or fg_nerf.add_rgb) - rand_outputs = [f[:,vol_pixels.size(1):] for f in output.full_out] - full_weights = torch.cat([output.fg_weights, output.bg_weights * output.bg_lambda.unsqueeze(-1)], -1) \ - if output.get('bg_weights', None) is not None else output.fg_weights - full_weights = full_weights[:,vol_pixels.size(1):] - full_weights = rearrange(full_weights, 'b (h w) s -> b s h w', h=H.rnd_res, w=H.rnd_res) - - lh, lw = dividable(full_weights.size(1)) - full_x = rearrange(torch.cat(rand_outputs, 2), 'b (h w) (l m) d -> b d (l h) (m w)', - h=H.rnd_res, w=H.rnd_res, l=lh, m=lw) - full_x, full_img = self.split_feat(full_x, H.img_channels, split_rgb=split_rgb) - output.rand_out = (full_x, full_img, full_weights) - - else: - rand_x = 
rearrange(feat_map[:, vol_len:], 'b (h w) d -> b d h w', h=H.rnd_res) - output.rand_out = self.split_feat(rand_x, H.img_channels, split_rgb=split_rgb) - output.full_out = full_out - return output - - def post_process_outputs(self, outputs, freeze_nerf=False): - if freeze_nerf: - outputs = [x.detach() if isinstance(x, torch.Tensor) else x for x in outputs] - x, img = outputs[0], outputs[1] - probs = outputs[2] if len(outputs) == 3 else None - return x, img, probs - - def gen_optional_output(self, H, fg_nerf, nerf_input_cams, nerf_input_feats, latent_codes, styles, output): - _, camera_world, ray_vector = nerf_input_cams - z_shape_obj, z_app_obj = latent_codes[:2] - fg_depth_map = torch.sum(output.fg_weights * output.fg_depths[1], dim=-1, keepdim=True) - img = camera_world[:, :1] + fg_depth_map * ray_vector - img = img.permute(0,2,1).reshape(-1, 3, H.tgt_res, H.tgt_res) - - if 'input_feats' in H.render_option: - a, b = [r.split(':')[1:] for r in H.render_option.split(',') if r.startswith('input_feats')][0] - a, b = int(a), int(b) - if nerf_input_feats[0] == 'volume': - img = nerf_input_feats[1][:,a:a+3,b,:,:] - elif nerf_input_feats[0] == 'tri_plane': - img = nerf_input_feats[1][:,b,a:a+3,:,:] - elif nerf_input_feats[0] == 'hash_table': - assert self.I.hash_mode == 'grid_hash' - img = nerf_input_feats[1][:,self.I.offsets[b]:self.I.offsets[b+1], :] - siz = int(np.ceil(img.size(1)**(1/3))) - img = rearrange(img, 'b (d h w) c -> b (d c) h w', h=siz, w=siz, d=siz) - img = img[:, a:a+3] - else: - raise NotImplementedError - - if 'normal' in H.render_option.split(','): - shift_l, shift_r = img[:,:,2:,:], img[:,:,:-2,:] - shift_u, shift_d = img[:,:,:,2:], img[:,:,:,:-2] - diff_hor = normalize(shift_r - shift_l, axis=1)[0][:, :, :, 1:-1] - diff_ver = normalize(shift_u - shift_d, axis=1)[0][:, :, 1:-1, :] - normal = torch.cross(diff_hor, diff_ver, dim=1) - img = normalize(normal, axis=1)[0] - - if 'gradient' in H.render_option.split(','): - points, _ = self.C.get_evaluation_points(camera_world + ray_vector, camera_world, output.fg_depths[1]) - fg_shape = [H.batch_size, H.tgt_res, H.tgt_res, output.fg_depths[1].size(-1)] - with torch.enable_grad(): - points.requires_grad_(True) - inputs = self.I.query_input_features(points, nerf_input_feats, fg_shape, self.get_bound(), True) \ - if nerf_input_feats is not None else points - if (nerf_input_feats is not None) and len(nerf_input_feats) == 4 and nerf_input_feats[2] == 'tri_vector' and (self.I.sigma_dim > 0): - sigma_out = inputs[..., :8].sum(dim=-1, keepdim=True) - else: - _, sigma_out = fg_nerf(inputs, None, ws=styles, shape=fg_shape, z_shape=z_shape_obj, z_app=z_app_obj, requires_grad=True) - gradients, = grad( - outputs=sigma_out, inputs=points, - grad_outputs=torch.ones_like(sigma_out, requires_grad=False), - retain_graph=True, create_graph=True, only_inputs=True) - gradients = rearrange(gradients, 'b (n s) d -> b n s d', s=output.fg_depths[1].size(-1)) - avg_grads = (gradients * output.fg_weights.unsqueeze(-1)).sum(-2) - avg_grads = F.normalize(avg_grads, p=2, dim=-1) - normal = rearrange(avg_grads, 'b (h w) s -> b s h w', h=H.tgt_res, w=H.tgt_res) - img = -normal - - return {'full_out': (None, img)} - - -@persistence.persistent_class -class Upsampler(object): - - no_2d_renderer = False - no_residual_img = False - block_reses = None - shared_rgb_style = False - upsample_type = 'default' - img_channels = 3 - in_res = 32 - out_res = 512 - channel_base = 1 - channel_base_sz = None - channel_max = 512 - channel_dict = None - out_channel_dict = None - - 
def __init__(self, upsampler_kwargs, **other_kwargs): - # for compitatbility of old checkpoints - for key in other_kwargs: - if hasattr(self, key) and (key not in upsampler_kwargs): - upsampler_kwargs[key] = other_kwargs[key] - for key in upsampler_kwargs: - if hasattr(self, key): - setattr(self, key, upsampler_kwargs[key]) - - self.out_res_log2 = int(np.log2(self.out_res)) - - # set up upsamplers - if self.block_reses is None: - self.block_resolutions = [2 ** i for i in range(2, self.out_res_log2 + 1)] - self.block_resolutions = [b for b in self.block_resolutions if b > self.in_res] - else: - self.block_resolutions = self.block_reses - - if self.no_2d_renderer: - self.block_resolutions = [] - - def build_network(self, w_dim, input_dim, **block_kwargs): - upsamplers = [] - if len(self.block_resolutions) > 0: # nerf resolution smaller than image - channel_base = int(self.channel_base * 32768) if self.channel_base_sz is None else self.channel_base_sz - fp16_resolution = self.block_resolutions[0] * 2 # do not use fp16 for the first block - - if self.channel_dict is None: - channels_dict = {res: min(channel_base // res, self.channel_max) for res in self.block_resolutions} - else: - channels_dict = self.channel_dict - - if self.out_channel_dict is not None: - img_channels = self.out_channel_dict - else: - img_channels = {res: self.img_channels for res in self.block_resolutions} - - for ir, res in enumerate(self.block_resolutions): - res_before = self.block_resolutions[ir-1] if ir > 0 else self.in_res - in_channels = channels_dict[res_before] if ir > 0 else input_dim - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) # TRY False - is_last = (ir == (len(self.block_resolutions) - 1)) - no_upsample = (res == res_before) - block = util.construct_class_by_name( - class_name=block_kwargs.get('block_name', "training.networks.SynthesisBlock"), - in_channels=in_channels, - out_channels=out_channels, - w_dim=w_dim, - resolution=res, - img_channels=img_channels[res], - is_last=is_last, - use_fp16=use_fp16, - disable_upsample=no_upsample, - block_id=ir, - **block_kwargs) - - upsamplers += [{ - 'block': block, - 'num_ws': block.num_conv if not is_last else block.num_conv + block.num_torgb, - 'name': f'b{res}' if res_before != res else f'b{res}_l{ir}' - }] - self.num_ws = sum([u['num_ws'] for u in upsamplers]) - return upsamplers - - def forward_ws_split(self, ws, blocks): - block_ws, w_idx = [], 0 - for ir, res in enumerate(self.block_resolutions): - block = blocks[ir] - if self.shared_rgb_style: - w = ws.narrow(1, w_idx, block.num_conv) - w_img = ws.narrow(1, -block.num_torgb, block.num_torgb) # TODO: tRGB to use the same style (?) - block_ws.append(torch.cat([w, w_img], 1)) - else: - block_ws.append(ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - return block_ws - - def forward_network(self, blocks, block_ws, x, img, target_res, alpha, skip_up=False, **block_kwargs): - imgs = [] - for index_l, (res, cur_ws) in enumerate(zip(self.block_resolutions, block_ws)): - if res > target_res: - break - - block = blocks[index_l] - block_noise = block_kwargs['voxel_noise'][index_l] if "voxel_noise" in block_kwargs else None - x, img = block( - x, - img if not self.no_residual_img else None, - cur_ws, - block_noise=block_noise, - skip_up=skip_up, - **block_kwargs) - - imgs += [img] - return imgs - - -@persistence.persistent_class -class NeRFInput(Upsampler): - """ Instead of positional encoding, it learns additional features for each points. 
- However, it is important to normalize the input points - """ - output_mode = 'none' - input_mode = 'random' # coordinates - - architecture = 'skip' - - # only useful for triplane/volume inputs - in_res = 4 - out_res = 256 - out_dim = 32 - sigma_dim = 8 - split_size = 64 - - # only useful for hashtable inputs - hash_n_min = 16 - hash_n_max = 512 - hash_size = 16 - hash_level = 16 - hash_dim_in = 32 - hash_dim_mid = None - hash_dim_out = 2 - hash_n_layer = 4 - hash_mode = 'fast_hash' # grid_hash (like volumes) - - keep_posenc = -1 - keep_nerf_latents = False - - def build_network(self, w_dim, **block_kwargs): - # change global settings for input field. - kwargs_copy = copy.deepcopy(block_kwargs) - kwargs_copy['kernel_size'] = 3 - kwargs_copy['upsample_mode'] = 'default' - kwargs_copy['use_noise'] = True - kwargs_copy['architecture'] = self.architecture - self._flag = 0 - - assert self.input_mode == 'random', \ - "currently only support normal StyleGAN2. in the future we may work on other inputs." - - # plane-based inputs with modulated 2D convolutions - if self.output_mode == 'tri_plane_reshape': - self.img_channels, in_channels, const = 3 * self.out_dim, 0, None - elif self.output_mode == 'tri_plane_product': #TODO: sigma_dim is for density - self.img_channels, in_channels = 3 * (self.out_dim + self.sigma_dim), 0 - const = torch.nn.Parameter(0.1 * torch.randn([self.img_channels, self.out_res])) - elif self.output_mode == 'multi_planes': - self.img_channels, in_channels, const = self.out_dim * self.split_size, 0, None - kwargs_copy['architecture'] = 'orig' - - # volume-based inputs with modulated 3D convolutions - elif self.output_mode == '3d_volume': # use 3D convolution to generate - kwargs_copy['architecture'] = 'orig' - kwargs_copy['mode'] = '3d' - self.img_channels, in_channels, const = self.out_dim, 0, None - elif self.output_mode == 'ms_volume': # multi-resolution voulume, between hashtable and volumes - kwargs_copy['architecture'] = 'orig' - kwargs_copy['mode'] = '3d' - self.img_channels, in_channels, const = self.out_dim, 0, None - - # embedding-based inputs with modulated MLPs - elif self.output_mode == 'hash_table': - if self.hash_mode == 'grid_hash': - assert self.hash_size % 3 == 0, "needs to be 3D" - kwargs_copy['hash_size'], self._flag = 2 ** self.hash_size, 1 - assert self.hash_dim_out * self.hash_level == self.out_dim, "size must matched" - return self.build_modulated_embedding(w_dim, **kwargs_copy) - - elif self.output_mode == 'ms_nerf_hash': - self.hash_mode, self._flag = 'grid_hash', 2 - ms_nerf = NeRFBlock({ - 'rgb_out_dim': self.hash_dim_out * self.hash_level, # HACK - 'magnitude_ema_beta': block_kwargs['magnitude_ema_beta'], - 'no_sigma': True, 'predict_rgb': False, 'add_rgb': False, - 'n_freq_posenc': 5, - }) - self.num_ws = ms_nerf.num_ws - return [{'block': ms_nerf, 'num_ws': ms_nerf.num_ws, 'name': 'ms_nerf'}] - - else: - raise NotImplementedError - - networks = super().build_network(w_dim, in_channels, **kwargs_copy) - if const is not None: - networks.append({'block': const, 'num_ws': 0, 'name': 'const'}) - return networks - - def forward_ws_split(self, ws, blocks): - if self._flag == 1: - return ws.split(1, dim=1)[:len(blocks)-1] - elif self._flag == 0: - return super().forward_ws_split(ws, blocks) - else: - return ws # do not split - - def forward_network(self, blocks, block_ws, batch_size, **block_kwargs): - x, img, out = None, None, None - def _forward_conv_networks(x, img, blocks, block_ws): - for index_l, (res, cur_ws) in 
enumerate(zip(self.block_resolutions, block_ws)): - x, img = blocks[index_l](x, img, cur_ws, **block_kwargs) - return img - - def _forward_ffn_networks(x, blocks, block_ws): - #TODO: FFN is implemented as 1x1 conv for now # - h, w = dividable(x.size(0)) - x = repeat(x, 'n d -> b n d', b=batch_size) - x = rearrange(x, 'b (h w) d -> b d h w', h=h, w=w) - for index_l, cur_ws in enumerate(block_ws): - block, cur_ws = blocks[index_l], cur_ws[:, 0] - x = block(x, cur_ws) - return x - - # tri-plane outputs - if 'tri_plane' in self.output_mode: - img = _forward_conv_networks(x, img, blocks, block_ws) - if self.output_mode == 'tri_plane_reshape': - out = ('tri_plane', rearrange(img, 'b (s c) h w -> b s c h w', s=3)) - elif self.output_mode == 'tri_plane_product': - out = ('tri_plane', rearrange(img, 'b (s c) h w -> b s c h w', s=3), - 'tri_vector', repeat(rearrange(blocks[-1], '(s c) d -> s c d', s=3), 's c d -> b s c d', b=img.size(0))) - else: - raise NotImplementedError("remove support for other types of tri-plane implementation.") - - # volume/3d voxel outputs - elif self.output_mode == 'multi_planes': - img = _forward_conv_networks(x, img, blocks, block_ws) - out = ('volume', rearrange(img, 'b (s c) h w -> b s c h w', s=self.out_dim)) - elif self.output_mode == '3d_volume': - img = _forward_conv_networks(x, img, blocks, block_ws) - out = ('volume', img) - - # multi-resolution 3d volume outputs (similar to hash-table) - elif self.output_mode == 'ms_volume': - img = _forward_conv_networks(x, img, blocks, block_ws) - out = ('ms_volume', rearrange(img, 'b (l m) d h w -> b l m d h w', l=self.hash_level)) - - # hash-table outputs (need hash sample implemented #TODO# - elif self.output_mode == 'hash_table': - x, blocks = blocks[-1], blocks[:-1] - if len(blocks) > 0: - x = _forward_ffn_networks(x, blocks, block_ws) - out = ('hash_table', rearrange(x, 'b d h w -> b (h w) d')) - else: - out = ('hash_table', repeat(x, 'n d -> b n d', b=batch_size)) - elif self.output_mode == 'ms_nerf_hash': - # prepare inputs for nerf - x = torch.linspace(-1, 1, steps=self.out_res, device=block_ws.device) - x = torch.stack(torch.meshgrid(x,x,x), -1).reshape(-1, 3) - x = repeat(x, 'n s -> b n s', b=block_ws.size(0)) - x = blocks[0](x, None, ws=block_ws, shape=[block_ws.size(0), 32, 32, 32])[0] - x = rearrange(x, 'b (d h w) (l m) -> b l m d h w', l=self.hash_level, d=32, h=32, w=32) - out = ('ms_volume', x) - - else: - raise NotImplementedError - - return out - - def query_input_features(self, p_i, input_feats, p_shape, bound, grad_inputs=False): - batch_size, height, width, n_steps = p_shape - p_i = p_i / bound - - if input_feats[0] == 'tri_plane': - # TODO!! Our world space, x->depth, y->width, z->height - lh, lw = dividable(n_steps) - p_ds = rearrange(p_i, 'b (h w l m) d -> b (l h) (m w) d', - b=batch_size, h=height, w=width, l=lh, m=lw).split(1, dim=-1) - px, py, pz = p_ds[0], p_ds[1], p_ds[2] - - # project points onto three planes - p_xy = torch.cat([px, py], -1) - p_xz = torch.cat([px, pz], -1) - p_yz = torch.cat([py, pz], -1) - p_gs = torch.cat([p_xy, p_xz, p_yz], 0) - f_in = torch.cat([input_feats[1][:, i] for i in range(3)], 0) - p_f = grid_sample(f_in, p_gs) # gradient-fix bilinear interpolation - p_f = [p_f[i * batch_size: (i+1) * batch_size] for i in range(3)] - - # project points to three vectors (optional) - if len(input_feats) == 4 and input_feats[2] == 'tri_vector': - # TODO: PyTorch did not support grid_sample for 1D data. Maybe need custom code. 
- p_gs_vec = torch.cat([pz, py, px], 0) - f_in_vec = torch.cat([input_feats[3][:, i] for i in range(3)], 0) - p_f_vec = grid_sample(f_in_vec.unsqueeze(-1), torch.cat([torch.zeros_like(p_gs_vec), p_gs_vec], -1)) - p_f_vec = [p_f_vec[i * batch_size: (i+1) * batch_size] for i in range(3)] - - # multiply on the triplane features - p_f = [m * v for m, v in zip(p_f, p_f_vec)] - - p_f = sum(p_f) - p_f = rearrange(p_f, 'b d (l h) (m w) -> b (h w l m) d', l=lh, m=lw) - - elif input_feats[0] == 'volume': - # TODO!! Our world space, x->depth, y->width, z->height - # (width-c, height-c, depth-c), volume (B x N x D x H x W) - p_ds = rearrange(p_i, 'b (h w s) d -> b s h w d', - b=batch_size, h=height, w=width, s=n_steps).split(1, dim=-1) - px, py, pz = p_ds[0], p_ds[1], p_ds[2] - p_yzx = torch.cat([py, -pz, px], -1) - p_f = F.grid_sample(input_feats[1], p_yzx, mode='bilinear', align_corners=False) - p_f = rearrange(p_f, 'b c s h w -> b (h w s) c') - - elif input_feats[0] == 'ms_volume': - # TODO!! Multi-resolution volumes (experimental) - # for smoothness, maybe we should expand the volume? (TODO) - # print(p_i.shape) - ms_v = input_feats[1].new_zeros( - batch_size, self.hash_level, self.hash_dim_out, self.out_res+1, self.out_res+1, self.out_res+1) - ms_v[..., 1:, 1:, 1:] = input_feats[1].flip([3,4,5]) - ms_v[..., :self.out_res, :self.out_res, :self.out_res] = input_feats[1] - v_size = ms_v.size(-1) - - # multi-resolutions - b = math.exp((math.log(self.hash_n_max) - math.log(self.hash_n_min))/(self.hash_level-1)) - hash_res_ls = [round(self.hash_n_min * b ** l) for l in range(self.hash_level)] - - # prepare interpolate grids - p_ds = rearrange(p_i, 'b (h w s) d -> b s h w d', - b=batch_size, h=height, w=width, s=n_steps).split(1, dim=-1) - px, py, pz = p_ds[0], p_ds[1], p_ds[2] - p_yzx = torch.cat([py, -pz, px], -1) - p_yzx = ((p_yzx + 1) / 2).clamp(min=0, max=1) # normalize to 0~1 (just for safe) - p_yzx = torch.stack([p_yzx if n < v_size else torch.fmod(p_yzx * n, v_size) / v_size for n in hash_res_ls], 1) - p_yzx = (p_yzx * 2 - 1).view(-1, n_steps, height, width, 3) - - ms_v = ms_v.view(-1, self.hash_dim_out, v_size, v_size, v_size) # back to -1~1 - p_f = F.grid_sample(ms_v, p_yzx, mode='bilinear', align_corners=False) - p_f = rearrange(p_f, '(b l) c s h w -> b (h w s) (l c)', l=self.hash_level) - - elif input_feats[0] == 'hash_table': - # TODO:!! 
Experimental code trying to learn hashtable used in (maybe buggy) - # https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf - - p_xyz = ((p_i + 1) / 2).clamp(min=0, max=1) # normalize to 0~1 - p_f = hash_sample( - p_xyz, input_feats[1], self.offsets.to(p_xyz.device), - self.beta, self.hash_n_min, grad_inputs, mode=self.hash_mode) - - else: - raise NotImplementedError - - if self.keep_posenc > -1: - if self.keep_posenc > 0: - p_f = torch.cat([p_f, positional_encoding(p_i, self.keep_posenc, use_pos=True)], -1) - else: - p_f = torch.cat([p_f, p_i], -1) - - return p_f - - def build_hashtable_info(self, hash_size): - self.beta = math.exp((math.log(self.hash_n_max) - math.log(self.hash_n_min)) / (self.hash_level-1)) - self.hash_res_ls = [round(self.hash_n_min * self.beta ** l) for l in range(self.hash_level)] - offsets, offset = [], 0 - for i in range(self.hash_level): - resolution = self.hash_res_ls[i] - params_in_level = min(hash_size, (resolution + 1) ** 3) - offsets.append(offset) - offset += params_in_level - offsets.append(offset) - self.offsets = torch.from_numpy(np.array(offsets, dtype=np.int32)) - return offset - - def build_modulated_embedding(self, w_dim, hash_size, **block_kwargs): - # allocate parameters - offset = self.build_hashtable_info(hash_size) - hash_const = torch.nn.Parameter(torch.zeros( - [offset, self.hash_dim_in if self.hash_n_layer > -1 else self.hash_dim_out])) - hash_const.data.uniform_(-1e-4, 1e-4) - - hash_networks = [] - if self.hash_n_layer > -1: - input_dim = self.hash_dim_in - for l in range(self.hash_n_layer): - output_dim = self.hash_dim_mid if self.hash_dim_mid is not None else self.hash_dim_in - hash_networks.append({ - 'block': Style2Layer(input_dim, output_dim, w_dim), - 'num_ws': 1, 'name': f'hmlp{l}' - }) - input_dim = output_dim - hash_networks.append({ - 'block': ToRGBLayer(input_dim, self.hash_dim_out, w_dim, kernel_size=1), - 'num_ws': 1, 'name': 'hmlpout'}) - hash_networks.append({'block': hash_const, 'num_ws': 0, 'name': 'hash_const'}) - self.num_ws = sum([h['num_ws'] for h in hash_networks]) - return hash_networks - - -@persistence.persistent_class -class NeRFSynthesisNetwork(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - channel_base = 1, - channel_max = 1024, - - # module settings - camera_kwargs = {}, - renderer_kwargs = {}, - upsampler_kwargs = {}, - input_kwargs = {}, - foreground_kwargs = {}, - background_kwargs = {}, - - # nerf space settings - z_dim = 256, - z_dim_bg = 128, - rgb_out_dim = 256, - rgb_out_dim_bg = None, - resolution_vol = 32, - resolution_start = None, - progressive = True, - prog_nerf_only = False, - interp_steps = None, # (optional) "start_step:final_step" - - # others (regularization) - regularization = [], # nv_beta, nv_vol - predict_camera = False, - camera_condition = None, - n_reg_samples = 0, - reg_full = False, - - cam_based_sampler = False, - rectangular = None, - freeze_nerf = False, - **block_kwargs, # Other arguments for SynthesisBlock. 
- ): - assert img_resolution >= 4 and img_resolution & (img_resolution - 1) == 0 - super().__init__() - - # dimensions - self.w_dim = w_dim - self.z_dim = z_dim - self.z_dim_bg = z_dim_bg - self.num_ws = 0 - self.rgb_out_dim = rgb_out_dim - self.rgb_out_dim_bg = rgb_out_dim_bg if rgb_out_dim_bg is not None else rgb_out_dim - - self.img_resolution = img_resolution - self.resolution_vol = resolution_vol if resolution_vol < img_resolution else img_resolution - self.resolution_start = resolution_start if resolution_start is not None else resolution_vol - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - - # number of samples - self.n_reg_samples = n_reg_samples - self.reg_full = reg_full - self.use_noise = block_kwargs.get('use_noise', False) - - # ---------------------------------- Initialize Modules ---------------------------------------- -# - # camera module - self.C = CameraRay(camera_kwargs, **block_kwargs) - - # input encoding module - if (len(input_kwargs) > 0) and (input_kwargs['output_mode'] != 'none'): # using synthezied inputs - input_kwargs['channel_base'] = input_kwargs.get('channel_base', channel_base) - input_kwargs['channel_max'] = input_kwargs.get('channel_max', channel_max) - self.I = NeRFInput(input_kwargs, **block_kwargs) - else: - self.I = None - - # volume renderer module - self.V = VolumeRenderer(renderer_kwargs, camera_ray=self.C, input_encoding=self.I, **block_kwargs) - - # upsampler module - upsampler_kwargs.update(dict( - img_channels=img_channels, - in_res=resolution_vol, - out_res=img_resolution, - channel_max=channel_max, - channel_base=channel_base)) - self.U = Upsampler(upsampler_kwargs, **block_kwargs) - - # full model resolutions - self.block_resolutions = copy.deepcopy(self.U.block_resolutions) - if self.resolution_start < self.resolution_vol: - r = self.resolution_vol - while r > self.resolution_start: - self.block_resolutions.insert(0, r) - r = r // 2 - - self.predict_camera = predict_camera - if predict_camera: # encoder side camera predictor (not very useful) - self.camera_generator = CameraGenerator() - self.camera_condition = camera_condition - if self.camera_condition is not None: # style vector modulated by the camera poses (uv) - self.camera_map = MappingNetwork(z_dim=0, c_dim=16, w_dim=self.w_dim, num_ws=None, w_avg_beta=None, num_layers=2) - - # ray level choices - self.regularization = regularization - self.margin = block_kwargs.get('margin', 0) - self.activation = block_kwargs.get('activation', 'lrelu') - self.rectangular_crop = rectangular # [384, 512] ?? 
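# --- editor's illustrative sketch (not part of the original file) -----------------------
# The code above extends the upsampler's schedule downwards: starting from resolution_vol,
# it keeps prepending halved resolutions until it reaches resolution_start. A minimal,
# hedged standalone sketch; the function name and numeric values are assumptions chosen
# only to illustrate the resulting coarse-to-fine schedule:
def example_full_resolution_schedule(resolution_start=16, resolution_vol=32,
                                     upsampler_blocks=(64, 128, 256)):
    block_resolutions = list(upsampler_blocks)   # e.g. [64, 128, 256] from the Upsampler
    r = resolution_vol
    while r > resolution_start:                  # prepend coarser levels down to the start res
        block_resolutions.insert(0, r)
        r = r // 2
    return block_resolutions                     # -> [32, 64, 128, 256]
# ----------------------------------------------------------------------------------------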
- - # nerf (foregournd/background) - foreground_kwargs.update(dict( - z_dim=self.z_dim, - w_dim=w_dim, - rgb_out_dim=self.rgb_out_dim, - activation=self.activation)) - - # disable positional encoding if input encoding is given - if self.I is not None: - foreground_kwargs.update(dict( - disable_latents=(not self.I.keep_nerf_latents), - input_dim=self.I.out_dim + 3 * (2 * self.I.keep_posenc + 1) - if self.I.keep_posenc > -1 else self.I.out_dim, - positional_encoding='none')) - - self.fg_nerf = NeRFBlock(foreground_kwargs) - self.num_ws += self.fg_nerf.num_ws - - if not self.V.no_background: - background_kwargs.update(dict( - z_dim=self.z_dim_bg, w_dim=w_dim, - rgb_out_dim=self.rgb_out_dim_bg, - activation=self.activation)) - self.bg_nerf = NeRFBlock(background_kwargs) - self.num_ws += self.bg_nerf.num_ws - else: - self.bg_nerf = None - - # ---------------------------------- Build Networks ---------------------------------------- -# - # input encoding (optional) - if self.I is not None: - assert self.V.no_background, "does not support background field" - nerf_inputs = self.I.build_network(w_dim, **block_kwargs) - self.input_block_names = ['in_' + i['name'] for i in nerf_inputs] - self.num_ws += sum([i['num_ws'] for i in nerf_inputs]) - for i in nerf_inputs: - setattr(self, 'in_' + i['name'], i['block']) - - # upsampler - upsamplers = self.U.build_network(w_dim, self.fg_nerf.rgb_out_dim, **block_kwargs) - if len(upsamplers) > 0: - self.block_names = [u['name'] for u in upsamplers] - self.num_ws += sum([u['num_ws'] for u in upsamplers]) - for u in upsamplers: - setattr(self, u['name'], u['block']) - - # data-sampler - if cam_based_sampler: - self.sampler = (CameraQueriedSampler, {'camera_module': self.C}) - - # other hyperameters - self.progressive_growing = progressive - self.progressive_nerf_only = prog_nerf_only - assert not (self.progressive_growing and self.progressive_nerf_only) - if prog_nerf_only: - assert (self.n_reg_samples == 0) and (not reg_full), "does not support regularization" - - self.register_buffer("alpha", torch.scalar_tensor(-1)) - if predict_camera: - self.num_ws += 1 # additional w for camera - self.freeze_nerf = freeze_nerf - self.steps = None - self.interp_steps = [int(a) for a in interp_steps.split(':')] \ - if interp_steps is not None else None #TODO two-stage training trick (from EG3d paper, not working so far) - - def set_alpha(self, alpha): - if alpha is not None: - self.alpha.fill_(alpha) - - def set_steps(self, steps): - if hasattr(self, "steps"): - if self.steps is not None: - self.steps = self.steps * 0 + steps / 1000.0 - else: - self.steps = steps / 1000.0 - - def forward(self, ws, **block_kwargs): - block_ws, imgs, rand_imgs = [], [], [] - batch_size = block_kwargs['batch_size'] = ws.size(0) - n_levels, end_l, _, target_res = self.get_current_resolution() - - # save ws for potential usage. 
- block_kwargs['ws_detach'] = ws.detach() - - # cameras, background codes - if self.camera_condition is not None: - cam_cond = self.get_camera_samples(batch_size, ws, block_kwargs, gen_cond=True) - - if "camera_matrices" not in block_kwargs: - block_kwargs['camera_matrices'] = self.get_camera_samples(batch_size, ws, block_kwargs) - if (self.camera_condition is not None) and (cam_cond is None): - cam_cond = block_kwargs['camera_matrices'] - - block_kwargs['theta'] = self.C.get_roll(ws, self.training, **block_kwargs) - - # get latent codes instead of style vectors (used in GRAF & GIRAFFE) - if "latent_codes" not in block_kwargs: - block_kwargs["latent_codes"] = self.get_latent_codes(batch_size, device=ws.device) - - if (self.camera_condition is not None) and (self.camera_condition == 'full'): - cam_cond = normalize_2nd_moment(self.camera_map(None, cam_cond[1].reshape(-1, 16))) - ws = ws * cam_cond[:, None, :] - - # generate features for input points (Optional, default not use) - with torch.autograd.profiler.record_function('nerf_input_feats'): - if self.I is not None: - ws = ws.to(torch.float32) - blocks = [getattr(self, name) for name in self.input_block_names] - block_ws = self.I.forward_ws_split(ws, blocks) - nerf_input_feats = self.I.forward_network(blocks, block_ws, **block_kwargs) - ws = ws[:, self.I.num_ws:] - else: - nerf_input_feats = None - - # prepare for NeRF part - with torch.autograd.profiler.record_function('prepare_nerf_path'): - if self.progressive_nerf_only and (self.alpha > -1): - cur_resolution = int(self.resolution_start * (1 - self.alpha) + self.resolution_vol * self.alpha) - elif (end_l == 0) or len(self.block_resolutions) == 0: - cur_resolution = self.resolution_start - else: - cur_resolution = self.block_resolutions[end_l-1] - - vol_resolution = self.resolution_vol if self.resolution_vol < cur_resolution else cur_resolution - nerf_resolution = vol_resolution - if (self.interp_steps is not None) and (self.steps is not None) and (self.alpha > 0): # interpolation trick (maybe work??) - if self.steps < self.interp_steps[0]: - nerf_resolution = vol_resolution // 2 - elif self.steps < self.interp_steps[1]: - nerf_resolution = (self.steps - self.interp_steps[0]) / (self.interp_steps[1] - self.interp_steps[0]) - nerf_resolution = int(nerf_resolution * (vol_resolution / 2) + vol_resolution / 2) - - vol_pixels, tgt_pixels = self.C.prepare_pixels(self.img_resolution, cur_resolution, nerf_resolution, **block_kwargs) - if (end_l > 0) and (self.n_reg_samples > 0) and self.training: - rand_pixels, rand_indexs = self.C.prepare_pixels_regularization(tgt_pixels, self.n_reg_samples) - else: - rand_pixels, rand_indexs = None, None - - if self.fg_nerf.num_ws > 0: # use style vector instead of latent codes? 
- block_kwargs["styles"] = ws[:, :self.fg_nerf.num_ws] - ws = ws[:, self.fg_nerf.num_ws:] - if (self.bg_nerf is not None) and self.bg_nerf.num_ws > 0: - block_kwargs["styles_bg"] = ws[:, :self.bg_nerf.num_ws] - ws = ws[:, self.bg_nerf.num_ws:] - - # volume rendering - with torch.autograd.profiler.record_function('nerf'): - if (rand_pixels is not None) and self.training: - vol_pixels = (vol_pixels, rand_pixels) - outputs = self.V.forward_volume_rendering( - nerf_modules=(self.fg_nerf, self.bg_nerf), - vol_pixels=vol_pixels, - nerf_input_feats=nerf_input_feats, - return_full=self.reg_full, - alpha=self.alpha, - **block_kwargs) - - reg_loss = outputs.get('reg_loss', {}) - x, img, _ = self.V.post_process_outputs(outputs['full_out'], self.freeze_nerf) - if nerf_resolution < vol_resolution: - x = F.interpolate(x, vol_resolution, mode='bilinear', align_corners=False) - img = F.interpolate(img, vol_resolution, mode='bilinear', align_corners=False) - - # early output from the network (used for visualization) - if 'meshes' in block_kwargs: - from dnnlib.geometry import render_mesh - block_kwargs['voxel_noise'] = render_mesh(block_kwargs['meshes'], block_kwargs["camera_matrices"]) - - if (len(self.U.block_resolutions) == 0) or \ - (x is None) or \ - (block_kwargs.get("render_option", None) is not None and - 'early' in block_kwargs['render_option']): - if 'value' in block_kwargs['render_option']: - img = x[:,:3] - img = img / img.norm(dim=1, keepdim=True) - assert img is not None, "need to add RGB" - return img - - if 'rand_out' in outputs: - x_rand, img_rand, rand_probs = self.V.post_process_outputs(outputs['rand_out'], self.freeze_nerf) - lh, lw = dividable(rand_probs.size(1)) - rand_imgs += [img_rand] - - # append low-resolution image - if img is not None: - if self.progressive_nerf_only and (img.size(-1) < self.resolution_vol): - x = upsample(x, self.resolution_vol) - img = upsample(img, self.resolution_vol) - block_kwargs['img_nerf'] = img - - # Use 2D upsampler - if (cur_resolution > self.resolution_vol) or self.progressive_nerf_only: - imgs += [img] - if (self.camera_condition is not None) and (self.camera_condition != 'full'): - cam_cond = normalize_2nd_moment(self.camera_map(None, cam_cond[1].reshape(-1, 16))) - ws = ws * cam_cond[:, None, :] - - # 2D feature map upsampling - with torch.autograd.profiler.record_function('upsampling'): - ws = ws.to(torch.float32) - blocks = [getattr(self, name) for name in self.block_names] - block_ws = self.U.forward_ws_split(ws, blocks) - imgs += self.U.forward_network(blocks, block_ws, x, img, target_res, self.alpha, **block_kwargs) - img = imgs[-1] - if len(rand_imgs) > 0: # nerf path regularization - rand_imgs += self.U.forward_network( - blocks, block_ws, x_rand, img_rand, target_res, self.alpha, skip_up=True, **block_kwargs) - img_rand = rand_imgs[-1] - - with torch.autograd.profiler.record_function('rgb_interp'): - if (self.alpha > -1) and (not self.progressive_nerf_only) and self.progressive_growing: - if (self.alpha < 1) and (self.alpha > 0): - alpha, _ = math.modf(self.alpha * n_levels) - img_nerf = imgs[-2] - if img_nerf.size(-1) < img.size(-1): # need upsample image - img_nerf = upsample(img_nerf, 2 * img_nerf.size(-1)) - img = img_nerf * (1 - alpha) + img * alpha - if len(rand_imgs) > 0: - img_rand = rand_imgs[-2] * (1 - alpha) + img_rand * alpha - - with torch.autograd.profiler.record_function('nerf_path_reg_loss'): - if len(rand_imgs) > 0: # and self.training: # random pixel regularization?? 
- assert self.progressive_growing - if self.reg_full: # aggregate RGB in the end. - lh, lw = img_rand.size(2) // self.n_reg_samples, img_rand.size(3) // self.n_reg_samples - img_rand = rearrange(img_rand, 'b d (l h) (m w) -> b d (l m) h w', l=lh, m=lw) - img_rand = (img_rand * rand_probs[:, None]).sum(2) - if self.V.white_background: - img_rand = img_rand + (1 - rand_probs.sum(1, keepdim=True)) - rand_indexs = repeat(rand_indexs, 'b n -> b d n', d=img_rand.size(1)) - img_ff = rearrange(rearrange(img, 'b d l h -> b d (l h)').gather(2, rand_indexs), 'b d (l h) -> b d l h', l=self.n_reg_samples) - - def l2(img_ff, img_nf): - batch_size = img_nf.size(0) - return ((img_ff - img_nf) ** 2).sum(1).reshape(batch_size, -1).mean(-1, keepdim=True) - - reg_loss['reg_loss'] = l2(img_ff, img_rand) * 2.0 - - if len(reg_loss) > 0: - for key in reg_loss: - block_kwargs[key] = reg_loss[key] - - if self.rectangular_crop is not None: # in case rectangular - h, w = self.rectangular_crop - c = int(img.size(-1) * (1 - h / w) / 2) - mask = torch.ones_like(img) - mask[:, :, c:-c, :] = 0 - img = img.masked_fill(mask > 0, -1) - - block_kwargs['img'] = img - return block_kwargs - - def get_current_resolution(self): - n_levels = len(self.block_resolutions) - if not self.progressive_growing: - end_l = n_levels - elif (self.alpha > -1) and (not self.progressive_nerf_only): - if self.alpha == 0: - end_l = 0 - elif self.alpha == 1: - end_l = n_levels - elif self.alpha < 1: - end_l = int(math.modf(self.alpha * n_levels)[1] + 1) - else: - end_l = n_levels - target_res = self.resolution_start if end_l <= 0 else self.block_resolutions[end_l-1] - before_res = self.resolution_start if end_l <= 1 else self.block_resolutions[end_l-2] - return n_levels, end_l, before_res, target_res - - def get_latent_codes(self, batch_size=32, device="cpu", tmp=1.): - z_dim, z_dim_bg = self.z_dim, self.z_dim_bg - - def sample_z(*size): - torch.randn(*size).to(device) - return torch.randn(*size).to(device) * tmp - - z_shape_obj = sample_z(batch_size, z_dim) - z_app_obj = sample_z(batch_size, z_dim) - z_shape_bg = sample_z(batch_size, z_dim_bg) if not self.V.no_background else None - z_app_bg = sample_z(batch_size, z_dim_bg) if not self.V.no_background else None - return z_shape_obj, z_app_obj, z_shape_bg, z_app_bg - - def get_camera(self, *args, **kwargs): # for compitability - return self.C.get_camera(*args, **kwargs) - - def get_camera_samples(self, batch_size, ws, block_kwargs, gen_cond=False): - if gen_cond: # camera condition for generator (? 
a special variant) - if ('camera_matrices' in block_kwargs) and (not self.training): # this is for rendering - camera_matrices = self.get_camera(batch_size, device=ws.device, mode=[0.5, 0.5, 0.5]) - elif self.training and (np.random.rand() > 0.5): - camera_matrices = self.get_camera(batch_size, device=ws.device) - else: - camera_matrices = None - - elif 'camera_mode' in block_kwargs: - camera_matrices = self.get_camera(batch_size, device=ws.device, mode=block_kwargs["camera_mode"]) - - else: - if self.predict_camera: - rand_mode = ws.new_zeros(ws.size(0), 2) - if self.C.gaussian_camera: - rand_mode = rand_mode.normal_() - pred_mode = self.camera_generator(rand_mode) - else: - rand_mode = rand_mode.uniform_() - pred_mode = self.camera_generator(rand_mode - 0.5) - mode = rand_mode if self.alpha <= 0 else rand_mode + pred_mode * 0.1 - camera_matrices = self.get_camera(batch_size, device=ws.device, mode=mode) - - else: - camera_matrices = self.get_camera(batch_size, device=ws.device) - - if ('camera_RT' in block_kwargs) or ('camera_UV' in block_kwargs): - camera_matrices = list(camera_matrices) - camera_mask = torch.rand(batch_size).type_as(camera_matrices[1]).lt(self.alpha) - if 'camera_RT' in block_kwargs: - image_RT = block_kwargs['camera_RT'].reshape(-1, 4, 4) - camera_matrices[1][camera_mask] = image_RT[camera_mask] # replacing with inferred cameras - else: # sample uv instead of sampling the extrinsic matrix - image_UV = block_kwargs['camera_UV'] - image_RT = self.get_camera(batch_size, device=ws.device, mode=image_UV, force_uniform=True)[1] - camera_matrices[1][camera_mask] = image_RT[camera_mask] # replacing with inferred cameras - camera_matrices[2][camera_mask] = image_UV[camera_mask] # replacing with inferred uvs - camera_matrices = tuple(camera_matrices) - return camera_matrices - - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - channel_base = 1, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 0, # Use FP16 for the N highest resolutions. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - lowres_head = None, # add a low-resolution discriminator head - dual_discriminator = False, # add low-resolution (NeRF) image - - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. 
- camera_kwargs = {}, # Arguments for Camera predictor and condition (optional, refactoring) - upsample_type = 'default', - - progressive = False, - resize_real_early = False, # Peform resizing before the training loop - enable_ema = False, # Additionally save an EMA checkpoint - - **unused - ): - super().__init__() - # setup parameters - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - self.architecture = architecture - self.lowres_head = lowres_head - - self.dual_discriminator = dual_discriminator - self.upsample_type = upsample_type - self.progressive = progressive - self.resize_real_early = resize_real_early - self.enable_ema = enable_ema - - if self.progressive: - assert self.architecture == 'skip', "not supporting other types for now." - - channel_base = int(channel_base * 32768) - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - # camera prediction module - self.camera_kwargs = EasyDict( - predict_camera=False, - predict_styles=False, - camera_type='3d', - camera_encoder=True, - camera_encoder_progressive=False, - camera_disc=True) - - ## ------ for compitibility ------- # - self.camera_kwargs.predict_camera = unused.get('predict_camera', False) - self.camera_kwargs.camera_type = '9d' if unused.get('predict_9d_camera', False) else '3d' - self.camera_kwargs.camera_disc = not unused.get('no_camera_condition', False) - self.camera_kwargs.camera_encoder = unused.get('saperate_camera', False) - - self.camera_kwargs.update(camera_kwargs) - ## ------ for compitibility ------- # - - self.c_dim = c_dim - if self.camera_kwargs.predict_camera: - if self.camera_kwargs.camera_type == '3d': - self.c_dim = out_dim = 3 # (u, v) on the sphere - elif self.camera_kwargs.camera_type == '9d': - self.c_dim, out_dim = 16, 9 - elif self.camera_kwargs.camera_type == '16d': - self.c_dim = out_dim = 16 - else: - raise NotImplementedError('Wrong camera type') - if not self.camera_kwargs.camera_disc: - self.c_dim = c_dim - self.projector = EqualConv2d(channels_dict[4], out_dim, 4, padding=0, bias=False) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if self.c_dim == 0: - cmap_dim = 0 - if self.c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=self.c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - - if self.camera_kwargs.predict_styles: - self.w_dim, self.num_ws = self.camera_kwargs.w_dim, self.camera_kwargs.num_ws - self.projector_styles = EqualConv2d(channels_dict[4], self.w_dim * self.num_ws, 4, padding=0, bias=False) - self.mapping_styles = MappingNetwork(z_dim=0, c_dim=self.w_dim * self.num_ws, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - - # main discriminator blocks - common_kwargs = dict(img_channels=self.img_channels, architecture=architecture, conv_clamp=conv_clamp) - - def build_blocks(layer_name='b', low_resolution=False): - cur_layer_idx = 0 - block_resolutions = self.block_resolutions - if low_resolution: - block_resolutions = [r for r in self.block_resolutions if r <= self.lowres_head] - for res in block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, 
tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'{layer_name}{res}', block) - cur_layer_idx += block.num_layers - - build_blocks(layer_name='b') # main blocks - if self.dual_discriminator: - build_blocks(layer_name='dual', low_resolution=True) - if self.camera_kwargs.camera_encoder: - build_blocks(layer_name='c', low_resolution=(not self.camera_kwargs.camera_encoder_progressive)) - - # final output module - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - self.register_buffer("alpha", torch.scalar_tensor(-1)) - - def set_alpha(self, alpha): - if alpha is not None: - self.alpha = self.alpha * 0 + alpha - - def set_resolution(self, res): - self.curr_status = res - - def forward_blocks_progressive(self, img, mode="disc", **block_kwargs): - # mode from ['disc', 'dual_disc', 'cam_enc'] - if isinstance(img, dict): - img = img['img'] - - block_resolutions, alpha, lowres_head = self.get_block_resolutions(img) - layer_name, progressive = 'b', self.progressive - if mode == "cam_enc": - assert self.camera_kwargs.predict_camera and self.camera_kwargs.camera_encoder - layer_name = 'c' - if not self.camera_kwargs.camera_encoder_progressive: - block_resolutions, progressive = [r for r in self.block_resolutions if r <= self.lowres_head], False - img = downsample(img, self.lowres_head) - elif mode == 'dual_disc': - layer_name = 'dual' - block_resolutions, progressive = [r for r in self.block_resolutions if r <= self.lowres_head], False - - img0 = downsample(img, img.size(-1) // 2) if \ - progressive and (self.lowres_head is not None) and (self.alpha > -1) and (self.alpha < 1) and (alpha > 0) \ - else None - x = None if (not progressive) or (block_resolutions[0] == self.img_resolution) \ - else getattr(self, f'{layer_name}{block_resolutions[0]}').fromrgb(img) - - for res in block_resolutions: - block = getattr(self, f'{layer_name}{res}') - if (lowres_head == res) and (self.alpha > -1) and (self.alpha < 1) and (alpha > 0): - if progressive: - if self.architecture == 'skip': - img = img * alpha + img0 * (1 - alpha) - x = x * alpha + block.fromrgb(img0) * (1 - alpha) - x, img = block(x, img, **block_kwargs) - - output = {} - if (mode == 'cam_enc') or \ - (mode == 'disc' and self.camera_kwargs.predict_camera and (not self.camera_kwargs.camera_encoder)): - c = self.projector(x)[:,:,0,0] - if self.camera_kwargs.camera_type == '9d': - c = camera_9d_to_16d(c) - output['cam'] = c - if self.camera_kwargs.predict_styles: - w = self.projector_styles(x)[:,:,0,0] - output['styles'] = w - return output, x, img - - def get_camera_loss(self, RT=None, UV=None, c=None): - if (RT is None) or (UV is None): - return None - if self.camera_kwargs.camera_type == '3d': # UV has higher priority? 
- return F.mse_loss(UV, c) - else: - return F.smooth_l1_loss(RT.reshape(RT.size(0), -1), c) * 10 - - def get_styles_loss(self, WS=None, w=None): - if WS is None: - return None - return F.mse_loss(WS, w) * 0.1 - - def get_block_resolutions(self, input_img): - block_resolutions = self.block_resolutions - lowres_head = self.lowres_head - alpha = self.alpha - img_res = input_img.size(-1) - if self.progressive and (self.lowres_head is not None) and (self.alpha > -1): - if (self.alpha < 1) and (self.alpha > 0): - try: - n_levels, _, before_res, target_res = self.curr_status - alpha, index = math.modf(self.alpha * n_levels) - index = int(index) - except Exception as e: # TODO: this is a hack, better to save status as buffers. - before_res = target_res = img_res - if before_res == target_res: # no upsampling was used in generator, do not increase the discriminator - alpha = 0 - block_resolutions = [res for res in self.block_resolutions if res <= target_res] - lowres_head = before_res - elif self.alpha == 0: - block_resolutions = [res for res in self.block_resolutions if res <= lowres_head] - return block_resolutions, alpha, lowres_head - - def forward(self, inputs, c=None, aug_pipe=None, return_camera=False, **block_kwargs): - if not isinstance(inputs, dict): - inputs = {'img': inputs} - img = inputs['img'] - - # this is to handle real images - block_resolutions, alpha, _ = self.get_block_resolutions(img) - if img.size(-1) > block_resolutions[0]: - img = downsample(img, block_resolutions[0]) - if self.dual_discriminator and ('img_nerf' not in inputs): - inputs['img_nerf'] = downsample(img, self.lowres_head) - - RT = inputs['camera_matrices'][1].detach() if 'camera_matrices' in inputs else None - UV = inputs['camera_matrices'][2].detach() if 'camera_matrices' in inputs else None - WS = inputs['ws_detach'].reshape(inputs['batch_size'], -1) if 'ws_detach' in inputs else None - - no_condition = (c.size(-1) == 0) - - # forward separate camera encoder, which can also be progressive... 
- if self.camera_kwargs.camera_encoder: - out_camenc, _, _ = self.forward_blocks_progressive(img, mode='cam_enc', **block_kwargs) - if no_condition and ('cam' in out_camenc): - c, camera_loss = out_camenc['cam'], self.get_camera_loss(RT, UV, out_camenc['cam']) - if 'styles' in out_camenc: - w, styles_loss = out_camenc['styles'], self.get_styles_loss(WS, out_camenc['styles']) - no_condition = False - - # forward another dual discriminator only for low resolution images - if self.dual_discriminator: - _, x_nerf, img_nerf = self.forward_blocks_progressive(inputs['img_nerf'], mode='dual_disc', **block_kwargs) - - # if applied data augmentation for discriminator - if aug_pipe is not None: - img = aug_pipe(img) - - # perform main discriminator block - out_disc, x, img = self.forward_blocks_progressive(img, mode='disc', **block_kwargs) - if no_condition and ('cam' in out_disc): - c, camera_loss = out_disc['cam'], self.get_camera_loss(RT, UV, out_disc['cam']) - if 'styles' in out_disc: - w, styles_loss = out_disc['styles'], self.get_styles_loss(WS, out_disc['styles']) - no_condition = False - - # camera conditional discriminator - cmap = None - if self.c_dim > 0: - cc = c.clone().detach() - cmap = self.mapping(None, cc) - if self.camera_kwargs.predict_styles: - ww = w.clone().detach() - cmap = [cmap] + [self.mapping_styles(None, ww)] - - logits = self.b4(x, img, cmap) - if self.dual_discriminator: - logits = torch.cat([logits, self.b4(x_nerf, img_nerf, cmap)], 0) - - outputs = {'logits': logits} - if self.camera_kwargs.predict_camera and (camera_loss is not None): - outputs['camera_loss'] = camera_loss - if self.camera_kwargs.predict_styles and (styles_loss is not None): - outputs['styles_loss'] = styles_loss - if return_camera: - outputs['camera'] = c - return outputs - - -@persistence.persistent_class -class Encoder(torch.nn.Module): - def __init__(self, - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - bottleneck_factor = 2, # By default, the same as discriminator we use 4x4 features - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - channel_base = 1, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 0, # Use FP16 for the N highest resolutions. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping - lowres_head = None, # add a low-resolution discriminator head - block_kwargs = {}, # Arguments for DiscriminatorBlock. - model_kwargs = {}, - upsample_type = 'default', - progressive = False, - **unused - ): - super().__init__() - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, bottleneck_factor, -1)] - self.architecture = architecture - self.lowres_head = lowres_head - self.upsample_type = upsample_type - self.progressive = progressive - self.model_kwargs = model_kwargs - self.output_mode = model_kwargs.get('output_mode', 'styles') - if self.progressive: - assert self.architecture == 'skip', "not supporting other types for now." 
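# --- editor's illustrative sketch (not part of the original file) -----------------------
# Like the Discriminator above, this Encoder sizes its per-resolution feature widths with
# min(channel_base * 32768 // res, channel_max): coarser blocks get more channels until the
# cap is hit. A small, hedged worked example with hypothetical settings (channel_base=1,
# channel_max=512, 256px input); the function name and values are assumptions:
def example_channel_schedule(channel_base=1, channel_max=512,
                             resolutions=(256, 128, 64, 32, 16, 8, 4)):
    base = int(channel_base * 32768)
    # -> {256: 128, 128: 256, 64: 512, 32: 512, 16: 512, 8: 512, 4: 512}
    return {res: min(base // res, channel_max) for res in resolutions}
# ----------------------------------------------------------------------------------------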
- self.predict_camera = model_kwargs.get('predict_camera', False) - - channel_base = int(channel_base * 32768) - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - common_kwargs = dict(img_channels=self.img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - - # this is an encoder - if self.output_mode in ['W', 'W+', 'None']: - self.num_ws = self.model_kwargs.get('num_ws', 0) - self.n_latents = self.num_ws if self.output_mode == 'W+' else (0 if self.output_mode == 'None' else 1) - self.w_dim = self.model_kwargs.get('w_dim', 512) - self.add_dim = self.model_kwargs.get('add_dim', 0) if not self.predict_camera else 9 - self.out_dim = self.w_dim * self.n_latents + self.add_dim - assert self.out_dim > 0, 'output dimenstion has to be larger than 0' - assert self.block_resolutions[-1] // 2 == 4, "make sure the last resolution is 4x4" - self.projector = EqualConv2d(channels_dict[4], self.out_dim, 4, padding=0, bias=False) - else: - raise NotImplementedError - self.register_buffer("alpha", torch.scalar_tensor(-1)) - - def set_alpha(self, alpha): - if alpha is not None: - self.alpha.fill_(alpha) - - def set_resolution(self, res): - self.curr_status = res - - def get_block_resolutions(self, input_img): - block_resolutions = self.block_resolutions - lowres_head = self.lowres_head - alpha = self.alpha - img_res = input_img.size(-1) - if self.progressive and (self.lowres_head is not None) and (self.alpha > -1): - if (self.alpha < 1) and (self.alpha > 0): - try: - n_levels, _, before_res, target_res = self.curr_status - alpha, index = math.modf(self.alpha * n_levels) - index = int(index) - except Exception as e: # TODO: this is a hack, better to save status as buffers. 
- before_res = target_res = img_res - if before_res == target_res: - # no upsampling was used in generator, do not increase the discriminator - alpha = 0 - block_resolutions = [res for res in self.block_resolutions if res <= target_res] - lowres_head = before_res - elif self.alpha == 0: - block_resolutions = [res for res in self.block_resolutions if res <= lowres_head] - return block_resolutions, alpha, lowres_head - - def forward(self, inputs, **block_kwargs): - if isinstance(inputs, dict): - img = inputs['img'] - else: - img = inputs - - block_resolutions, alpha, lowres_head = self.get_block_resolutions(img) - if img.size(-1) > block_resolutions[0]: - img = downsample(img, block_resolutions[0]) - - if self.progressive and (self.lowres_head is not None) and (self.alpha > -1) and (self.alpha < 1) and (alpha > 0): - img0 = downsample(img, img.size(-1) // 2) - - x = None if (not self.progressive) or (block_resolutions[0] == self.img_resolution) \ - else getattr(self, f'b{block_resolutions[0]}').fromrgb(img) - - for res in block_resolutions: - block = getattr(self, f'b{res}') - if (lowres_head == res) and (self.alpha > -1) and (self.alpha < 1) and (alpha > 0): - if self.architecture == 'skip': - img = img * alpha + img0 * (1 - alpha) - if self.progressive: - x = x * alpha + block.fromrgb(img0) * (1 - alpha) # combine from img0 - x, img = block(x, img, **block_kwargs) - - outputs = {} - if self.output_mode in ['W', 'W+', 'None']: - out = self.projector(x)[:,:,0,0] - if self.predict_camera: - out, out_cam_9d = out[:, 9:], out[:, :9] - outputs['camera'] = camera_9d_to_16d(out_cam_9d) - - if self.output_mode == 'W+': - out = rearrange(out, 'b (n s) -> b n s', n=self.num_ws, s=self.w_dim) - elif self.output_mode == 'W': - out = repeat(out, 'b s -> b n s', n=self.num_ws) - else: - out = None - outputs['ws'] = out - - return outputs - -# ------------------------------------------------------------------------------------------- # - -class CameraQueriedSampler(torch.utils.data.Sampler): - def __init__(self, dataset, camera_module, nearest_neighbors=400, rank=0, num_replicas=1, device='cpu', seed=0): - assert len(dataset) > 0 - - super().__init__(dataset) - self.dataset = dataset - self.dataset_cameras = None - self.seed = seed - self.rank = rank - self.device = device - self.num_replicas = num_replicas - self.C = camera_module - self.K = nearest_neighbors - self.B = 1000 - - def update_dataset_cameras(self, estimator): - import tqdm - from torch_utils.distributed_utils import gather_list_and_concat - output = torch.ones(len(self.dataset), 16).to(self.device) - with torch.no_grad(): - predicted_cameras, image_indices, bsz = [], [], 64 - item_subset = [(i * self.num_replicas + self.rank) % len(self.dataset) for i in range((len(self.dataset) - 1) // self.num_replicas + 1)] - for _, (images, _, indices) in tqdm.tqdm(enumerate(torch.utils.data.DataLoader( - dataset=copy.deepcopy(self.dataset), sampler=item_subset, batch_size=bsz)), - total=len(item_subset)//bsz+1, colour='red', desc=f'Estimating camera poses for the training set at'): - predicted_cameras += [estimator(images.to(self.device).to(torch.float32) / 127.5 - 1)] - image_indices += [indices.to(self.device).long()] - predicted_cameras = torch.cat(predicted_cameras, 0) - image_indices = torch.cat(image_indices, 0) - if self.num_replicas > 1: - predicted_cameras = gather_list_and_concat(predicted_cameras) - image_indices = gather_list_and_concat(image_indices) - output[image_indices] = predicted_cameras - self.dataset_cameras = output - - def 
get_knn_cameras(self): - return torch.norm( - self.dataset_cameras.unsqueeze(1) - - self.C.get_camera(self.B, self.device)[0].reshape(1,self.B,16), dim=2, p=None - ).topk(self.K, largest=False, dim=0)[1] # K x B - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = np.random.RandomState(self.seed+self.rank) - while True: - if self.dataset_cameras is None: - rand_idx = rnd.randint(order.size) - yield rand_idx - else: - knn_idxs = self.get_knn_cameras() - for i in range(self.B): - rand_idx = rnd.randint(self.K) - yield knn_idxs[rand_idx, i].item() diff --git a/spaces/falterWliame/Face_Mask_Detection/Nalini Jameela Autobiography Malayalam Pdf !NEW! Free 14.md b/spaces/falterWliame/Face_Mask_Detection/Nalini Jameela Autobiography Malayalam Pdf !NEW! Free 14.md deleted file mode 100644 index 9671d7280b8a9807b81f86b3237e0c155f16103f..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Nalini Jameela Autobiography Malayalam Pdf !NEW! Free 14.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

Nalini's father was an ex-army man and a communist. He met with an untimely death when Nalini was nine. She lived with her mother, an ex-spinning-mill worker, and her younger sister in a small rented house in Kerala, India. Her family was poor and all their earnings were spent on education. Her mother's parents were unable to support her and her family. Nalini was educated at a convent school. After completing her secondary education, she started working as a helper in a small restaurant.

-

In the post-liberalization era, publishing houses have been forced to think of innovative ways to monetize their business. For example, if a book on yoga or cooking sells, is it not an added advantage to publish it in the regional language as well as English? This is the mantra of our growth strategy. We are not scared of publishing in local languages; we have even published Hindi translations of Malayalam books, says Ravi. Our books, on the other hand, have sold over 20 million copies, which is a significant number in the regional-language market worldwide. By publishing in regional languages, we hope to establish a presence in markets like Malayalam and Bengali, says Satchidanandan. We can certainly speak to and connect with readers in these markets.

-

Nalini Jameela Autobiography Malayalam Pdf Free 14


Download ✶✶✶ https://urlca.com/2uDcfb



-

There are some fundamental challenges in publishing in regional languages. One example, in Kerala, is the language of education: while higher education is in English, school education is still in Malayalam. Most people who grow up in this scenario end up learning English as their second language, and for a publisher it is not easy to reach out to this section of the market. Another challenge is that the Malayalam readership does not have a taste for good literature and likes to read what is available on the shelves. Yet another challenge is the bookstores and their lack of initiative to expand their business. There is an opportunity to explore the local-language books on the shelves, as they can be converted into readers by the bookstore owners. This creates a unique opportunity for publishers like us to reach out to readers in the region, says Ravi.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Combat Master Season 1 - The Best-in-Class Multiplayer Gunfight Game on Steam.md b/spaces/fatiXbelha/sd/Combat Master Season 1 - The Best-in-Class Multiplayer Gunfight Game on Steam.md deleted file mode 100644 index 026b8399c6fc97cc5779f407d4d0502008037e56..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Combat Master Season 1 - The Best-in-Class Multiplayer Gunfight Game on Steam.md +++ /dev/null @@ -1,184 +0,0 @@ -
-

Combat Master: A Fast-Paced FPS Game for PC and Mobile

-

If you are looking for a first-person shooter game that offers fast-paced action, stunning graphics, and smooth performance on any device, then you should check out Combat Master. It is a free-to-play FPS that is available on PC, mobile, and Linux platforms, and it features various game modes, maps, and weapons that will keep you entertained and challenged. In this article, we will tell you everything you need to know about Combat Master, including how to download it, what the system requirements are, how to play it, and why you should play it.

-

What is Combat Master?

-

Combat Master is an FPS game that was released in April 2023 by Alfa Bravo Inc, an independent game studio. The game is designed for extreme performance on low-end hardware, so you don't have to worry about lag or crashes. It also boasts outstanding AAA graphics that will immerse you in realistic and dynamic environments. Combat Master focuses on quality and innovation, offering shooter gameplay that will appeal to both casual and hardcore gamers.

-

combat master download


Download Zip https://urllie.com/2uNzhy



-

A free-to-play shooter game with AAA graphics and performance

-

One of the best things about Combat Master is that it is completely free to play. You don't have to pay anything to download or play the game. You also don't have to worry about any ads, loot boxes, or pay-to-win mechanics. The game is fair and balanced for everyone, regardless of whether they spend money or not. You can enjoy the game without any interruptions or disadvantages.

-

Another great thing about Combat Master is that it has amazing graphics and performance. The game uses a next-level engine that delivers exceptional visuals and sounds. You will be amazed by the details and effects of the weapons, characters, and environments. The game also has lightning-fast loading times, so you can get into the action in seconds. The game is optimized for both low-end and top-tier devices, with various settings available for customization. You can play the game on any hardware without compromising quality.

-

A game that offers various modes, maps, and weapons

-

Combat Master is a game that offers a lot of variety and content for its players. You can choose from different game modes, such as search and destroy, gun game, arms race, team deathmatch, free-for-all, and more. Each mode has its own objectives and rules that will test your skills and strategies. You can also play in offline mode if you want to practice or have fun without an internet connection.

-

The game also has various maps that cater to different play styles and preferences. You can explore 11 maps with vertical gameplay, close-quarter or ranged combat. Each map has its own layout, design, and features that will affect your tactics and movements. You can also use parkour jumps, slides, climbs, and throwing knives to gain an edge over your enemies.

-

The game also has an impressive arsenal of weapons that you can use to dominate the combat. You can choose from primary weapons and secondary weapons, such as assault rifles, SMGs, shotguns, sniper rifles, pistols, revolvers, and more. Each weapon has its own stats, such as damage, accuracy, fire rate, and recoil. You can also customize your weapons with skins, attachments, and camos. You can unlock new weapons and items by leveling up or completing missions.

-

A game that features a fair and balanced multiplayer experience

-

Combat Master is a game that features a multiplayer experience that is fair and balanced for all players. You can play online with or against other players from around the world, or create your own private matches with your friends. The game has a matchmaking system that ensures you are paired with players of similar skill level and ping. The game also has an anti-cheat system that prevents hackers and cheaters from ruining the game. You can report any suspicious or abusive players and they will be banned accordingly.

-

The game also has a ranking system that tracks your progress and performance in the game. You can earn ranks and badges based on your wins, kills, assists, and other factors. You can also view your stats and leaderboards to see how you compare with other players. The game also has a clan system that allows you to join or create your own clan with other players. You can chat, cooperate, and compete with your clan members and earn rewards and reputation.

-

How to Download Combat Master?

-

Downloading Combat Master is very easy and simple. You just need to follow these steps depending on your device:

-


-

For PC users, download it from Steam

-

If you want to play Combat Master on your PC, you need to download it from Steam, the most popular digital distribution platform for PC games. Here is how you can do it:

-
    -
  1. Create a free Steam account if you don't have one already.
  2. -
  3. Download and install the Steam client on your PC.
  4. -
  5. Launch the Steam client and log in with your account.
  6. -
  7. Search for Combat Master in the Steam store or click on this link: Combat Master on Steam.
  8. -
  9. Click on the "Play Game" button to start downloading the game.
  10. -
  11. Wait for the download to finish and then launch the game from your library.
  12. -
-
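If you are comfortable with the command line, Valve's steamcmd tool can also install Steam games from a script. This is strictly optional and only a rough sketch: it assumes steamcmd is already installed and on your PATH, the app ID below is a placeholder (look up the real one on the game's Steam store page), and the install folder is just an example. Most players should simply follow the Steam client steps above.

```python
# Rough sketch only: scripted install of a Steam game with Valve's steamcmd tool.
# Assumptions: steamcmd is installed and on PATH, and APP_ID is replaced with the
# real app ID from the game's Steam store page (the value below is a placeholder).
import subprocess

APP_ID = "000000"                       # placeholder, not the real Combat Master ID
INSTALL_DIR = r"C:\Games\CombatMaster"  # example install folder

subprocess.run(
    [
        "steamcmd",
        "+force_install_dir", INSTALL_DIR,
        "+login", "your_steam_username",  # steamcmd will prompt for your password/Steam Guard
        "+app_update", APP_ID, "validate",
        "+quit",
    ],
    check=True,
)
```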

For mobile users, download it from Google Play or App Store

-

If you want to play Combat Master on your mobile device, you need to download it from Google Play or App Store, depending on your device's operating system. Here is how you can do it:

-
    -
  1. Open the Google Play or App Store app on your device.
  2. -
  3. Search for Combat Master in the app store or click on these links: Combat Master on Google Play or Combat Master on App Store.
  4. -
  5. Tap on the "Install" button to start downloading the game.
  6. -
  7. Wait for the download to finish and then launch the game from your home screen.
  8. -
-

For Linux users, download it from SteamOS

-

If you want to play Combat Master on your Linux device, you need to download it from SteamOS, a Linux-based operating system developed by Valve Corporation. Here is how you can do it:

-
    -
  1. Create a free Steam account if you don't have one already.
  2. -
  3. Download and install SteamOS on your Linux device.
  4. -
  5. Launch SteamOS and log in with your account.
  6. -
  7. Search for Combat Master in the Steam store or click on this link: Combat Master on Steam.
  8. -
  9. Click on the "Play Game" button to start downloading the game.
  10. -
  11. Wait for the download to finish and then launch the game from your library.
  12. -
-

What are the System Requirements for Combat Master?

-

Combat Master is a game that can run on any device without any problems. However, if you want to have the best experience possible, you should check the system requirements before playing the game. Here are the minimum and recommended requirements for PC and mobile devices:

-

The minimum and recommended requirements for PC

-
| | Minimum | Recommended |
| OS | Windows 7/8/10 64-bit | Windows 10 64-bit |
| CPU | Intel Core i3-2100 or AMD FX-6300 | Intel Core i5-6600K or AMD Ryzen 5 1600 |
| RAM | 4 GB | 8 GB |
| GPU | Nvidia GeForce GTX 650 or AMD Radeon HD 7750 | Nvidia GeForce GTX 1060 or AMD Radeon RX 580 |
| DirectX | Version 11 | Version 12 |
| Storage | 10 GB available space | 15 GB available space |
| Network | Broadband Internet connection | Broadband Internet connection |
| Sound Card | DirectX compatible sound card | DirectX compatible sound card |
-
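If you want a quick way to compare your PC against the minimum figures in the table above, a short script can read the basics for you. This is only a sketch: it uses the third-party psutil package (install it with pip), the thresholds are taken from the Minimum column, and it cannot check your GPU or DirectX version, so treat the table as the real reference.

```python
# Sketch: compare this PC against the "Minimum" column of the table above.
# Requires the third-party psutil package (pip install psutil); GPU/DirectX are not checked.
import platform
import psutil

MIN_RAM_GB = 4    # minimum RAM from the table
MIN_DISK_GB = 10  # minimum available storage from the table

ram_gb = psutil.virtual_memory().total / 1024**3
disk_gb = psutil.disk_usage("C:\\" if platform.system() == "Windows" else "/").free / 1024**3

print(f"OS:        {platform.system()} {platform.release()}")
print(f"CPU:       {platform.processor() or 'unknown'} ({psutil.cpu_count(logical=False)} physical cores)")
print(f"RAM:       {ram_gb:.1f} GB  -> {'OK' if ram_gb >= MIN_RAM_GB else 'below minimum'}")
print(f"Free disk: {disk_gb:.1f} GB -> {'OK' if disk_gb >= MIN_DISK_GB else 'below minimum'}")
```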

The compatible devices and operating systems for mobile

-

Combat Master is compatible with most mobile devices that have Android or iOS operating systems. However, some devices may not be able to run the game smoothly or at all. Here are the minimum and recommended devices and operating systems for mobile:

-
| | Minimum | Recommended |
| Android Device | Samsung Galaxy S6 or equivalent | Samsung Galaxy S10 or equivalent |
| Android OS | Android 5.0 (Lollipop) or higher | Android 9.0 (Pie) or higher |
| iOS Device | iPhone 6S or equivalent | iPhone X or equivalent |
| iOS OS | iOS 11.0 or higher | iOS 13.0 or higher |
| Network | Wi-Fi or cellular data connection | Wi-Fi or cellular data connection |
-

How to Play Combat Master?

-

Playing Combat Master is very easy and fun. You just need to learn the basic controls and interface for your device, and then you can start playing any game mode you want. Here are some tips and tricks on how to play Combat Master:

-

The basic controls and interface for PC and mobile

-

The controls and interface for PC and mobile are slightly different, but they are both intuitive and customizable. Here are the default controls and interface for each device:

-
    -
  • For PC, you can use the mouse and keyboard to control your character and interact with the game. The mouse is used to aim, shoot, and look around. The keyboard is used to move, jump, crouch, slide, reload, switch weapons, throw grenades, and use abilities. You can also use the mouse wheel to zoom in or out. The interface shows your health, ammo, weapon, score, timer, map, and chat on the screen. You can change the key bindings and the interface settings in the options menu.
  • -
  • For mobile, you can use the touchscreen to control your character and interact with the game. The left side of the screen is used to move and jump. The right side of the screen is used to aim, shoot, crouch, slide, reload, switch weapons, throw grenades, and use abilities. You can also use the gyroscope to look around by tilting your device. The interface shows your health, ammo, weapon, score, timer, map, and chat on the screen. You can change the sensitivity and the interface settings in the options menu.
  • -
-

The different game modes and objectives

-

Combat Master has different game modes that you can play online or offline. Each game mode has its own objectives and rules that you need to follow. Here are some of the game modes that you can play:

-
    -
  • Search and Destroy: This is a team-based mode where one team tries to plant a bomb at one of two sites, while the other team tries to defuse it or eliminate the attackers. Each round lasts for 2 minutes or until one team wins. Each player has only one life per round.
  • -
  • Gun Game: This is a free-for-all mode where each player starts with a pistol and tries to kill other players with it. Each kill grants a new weapon with higher damage but lower fire rate. The first player to get a kill with all 20 weapons wins.
  • -
  • Arms Race: This is a team-based mode where each team tries to get as many kills as possible with different weapons. Each kill grants a new weapon with higher damage but lower fire rate. The first team to get a kill with all 20 weapons wins.
  • -
  • Team Deathmatch: This is a team-based mode where each team tries to get as many kills as possible within a time limit or until a score limit is reached. The team with the most kills wins.
  • -
  • Free-for-All: This is a solo mode where each player tries to get as many kills as possible within a time limit or until a score limit is reached. The player with the most kills wins.
  • -
-

The tips and tricks for mastering the combat

-

Combat Master is a game that requires skill and strategy to win. You need to master the combat mechanics and tactics to dominate your enemies. Here are some tips and tricks that will help you improve your gameplay:

-
    -
  • Aim for the head: Headshots deal more damage than body shots, so always aim for the head when shooting your enemies. You can also use attachments that increase your accuracy and reduce your recoil.
  • -
  • Use cover: Cover is your best friend in combat. You can use walls, crates, barrels, cars, and other objects to hide from enemy fire and peek out when you have a clear shot. You can also use cover to reload your weapon or heal yourself.
  • -
  • Move around: Moving around makes you harder to hit and allows you to flank your enemies or escape from danger. You can use parkour moves like jumps, slides, climbs, and throwing knives to move faster and more agilely.
  • -
  • Switch weapons: Switching weapons can save your life in certain situations. For example, if you run out of ammo or encounter an enemy at close range, you can switch to your secondary weapon or melee weapon for a quick kill.
  • -
  • Use grenades: Grenades are powerful tools that can deal massive damage or create diversions. You can use frag grenades to explode your enemies or flash grenades to blind them for a few seconds. You can also use smoke grenades to create a smokescreen or decoy grenades to distract your enemies.
  • -
  • Use abilities: Abilities are special skills that you can use to gain an advantage in combat. You can use abilities like radar, shield, invisibility, speed boost, and more. Each ability has a cooldown time, so use them wisely and strategically.
  • -
-

Why Play Combat Master?

-

Combat Master is a game that you should play if you love FPS games or want to try something new and exciting. Here are some of the reasons why you should play Combat Master:

-

The benefits of playing a fast-paced FPS game

-

Playing a fast-paced FPS game like Combat Master can have many benefits for your mental and physical health. Some of the benefits are:

-
    -
  • It improves your reaction time, hand-eye coordination, and spatial awareness.
  • -
  • It enhances your cognitive skills, such as memory, attention, problem-solving, and creativity.
  • -
  • It reduces your stress, anxiety, and depression levels.
  • -
  • It boosts your mood, confidence, and self-esteem.
  • -
  • It fosters your social skills, communication, and teamwork.
  • -
-

The features that make Combat Master stand out from other FPS games

-

Combat Master is a game that has many features that make it stand out from other FPS games in the market. Some of the features are:

-
    -
  • It is free to play and has no ads or pay-to-win mechanics.
  • -
  • It has AAA graphics and performance on any device.
  • -
  • It has various game modes, maps, and weapons to choose from.
  • -
  • It has a fair and balanced multiplayer experience with anti-cheat and matchmaking systems.
  • -
  • It has a ranking, clan, and chat system that allows you to track your progress and interact with other players.
  • -
-

The feedback and reviews from other players and critics

-

Combat Master is a game that has received positive feedback and reviews from other players and critics. Here are some of the comments and ratings that the game has received:

-
"Combat Master is one of the best FPS games I have ever played. It is fast, fun, and addictive. The graphics are amazing and the gameplay is smooth. I love the variety of modes, maps, and weapons. I highly recommend this game to anyone who loves shooting games." - Player review on Steam
-
"Combat Master is a game that delivers on its promise of extreme performance on low-end hardware. It is a game that is easy to pick up but hard to master. It is a game that offers a fair and balanced multiplayer experience with no pay-to-win elements. It is a game that deserves your attention and support." - Critic review on IGN
-
"Combat Master is a game that surprises me with its quality and innovation. It is a game that has stunning graphics and sounds that immerse you in the realistic and dynamic environments. It is a game that has different game modes that cater to different play styles and preferences. It is a game that I enjoy playing every day." - Player review on Google Play
-

Conclusion

-

Combat Master is a fast-paced FPS game for PC and mobile that you should not miss. It is a free-to-play game that has AAA graphics and performance on any device. It is a game that offers various modes, maps, and weapons that will keep you entertained and challenged. It is a game that features a fair and balanced multiplayer experience with anti-cheat and matchmaking systems. It is a game that has many benefits for your mental and physical health. It is a game that has positive feedback and reviews from other players and critics. It is a game that you should download and play right now.

-

FAQs

-

Here are some of the frequently asked questions about Combat Master:

-
    -
  1. Is Combat Master safe to download?
    Yes, Combat Master is safe to download from Steam, Google Play, App Store, or SteamOS. The game does not contain any viruses, malware, or spyware. The game also does not collect any personal or sensitive information from its users.
  2. -
  3. How often does Combat Master update?
    Combat Master updates regularly with new content, features, bug fixes, and improvements. The developers are always working hard to make the game better and more enjoyable for the players. You can follow the official website or social media accounts of the game to get the latest news and updates.
  4. -
  5. Can I play Combat Master offline?
    Yes, you can play Combat Master offline if you want to practice or have fun without internet connection. You can play offline mode, which is similar to team deathmatch, but with bots instead of real players. You can also customize the difficulty and number of bots in the options menu.
  6. -
  7. How can I contact the developers of Combat Master?
    If you have any questions, suggestions, feedback, or issues about Combat Master, you can contact the developers through their official website or email address. You can also join their Discord server or follow their Twitter account to communicate with them and other players. Here are the links to their contact information:
  8. - -
  9. Can I play Combat Master with a controller?
    Yes, you can play Combat Master with a controller if you prefer. The game supports most controllers that are compatible with your device. You can also adjust the controller settings in the options menu.
  10. -

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Discover Azerbaijan with Google Maps 3D A Virtual Tour of the Land of Fire.md b/spaces/fatiXbelha/sd/Discover Azerbaijan with Google Maps 3D A Virtual Tour of the Land of Fire.md deleted file mode 100644 index e77f99e1cc5eb26672ccf63d90cf2c8c62484d43..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Discover Azerbaijan with Google Maps 3D A Virtual Tour of the Land of Fire.md +++ /dev/null @@ -1,129 +0,0 @@ - -

Google Maps 3D Azerbaijan: A Guide to Explore the Land of Fire

-

Azerbaijan is a country in the South Caucasus region, bordered by Russia, Iran, Armenia, Georgia, and the Caspian Sea. It is known as the Land of Fire because of its rich oil and gas resources, as well as its ancient Zoroastrian fire temples. Azerbaijan has a diverse culture, history, and landscape, ranging from modern skyscrapers to medieval fortresses, from snowy mountains to semi-desert plains, from lush forests to mud volcanoes.

-

google maps 3d azerbaijan


Download: https://urllie.com/2uNAjQ



-

If you want to discover this fascinating country, but you don't have the time or money to travel there, you can use Google Maps 3D to explore it from your home. Google Maps 3D is a feature that allows you to see realistic 3D models of buildings, terrain, and landmarks on Google Maps. You can zoom in, rotate, tilt, and pan the map to get different perspectives and angles of the places you want to see.

-

Introduction

-

What is Google Maps 3D?

-

Google Maps 3D is a feature that uses satellite imagery, aerial photography, and computer-generated models to create realistic 3D representations of places on Google Maps. You can access it on your web browser or on your mobile app. Google Maps 3D is available for hundreds of cities and countries around the world, including Azerbaijan.

-

Why visit Azerbaijan?

-

Azerbaijan is a country that offers something for everyone. Whether you are interested in history, culture, nature, or architecture, you will find plenty of attractions and activities to enjoy in Azerbaijan. You can visit ancient mosques, palaces, churches, and museums; you can admire modern buildings, bridges, parks, and monuments; you can explore natural wonders like mountains, lakes, waterfalls, caves, and volcanoes; you can taste delicious cuisine, listen to traditional music, watch folk dances, and participate in festivals.

-


-

How to use Google Maps 3D to explore Azerbaijan

-

Step 1: Open Google Maps on your browser or app

-

To start your virtual tour of Azerbaijan, you need to open Google Maps on your web browser or on your mobile app. You can use any device that supports Google Maps, such as a computer, laptop, tablet, or smartphone.

-

Step 2: Search for Azerbaijan or any specific location in the country

-

You can search for Azerbaijan by typing its name in the search box or by clicking on the map. You can also search for any specific location in the country, such as a city, a town, a village, a landmark, or a natural feature. For example, you can search for Baku, Sheki, Qobustan, or the Caspian Sea.

-
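If you would rather open these searches from a script or share them as plain links, Google's documented Maps URL format accepts a text query. The snippet below is just a small sketch using place names mentioned in this guide; the links open the normal Google Maps page, and you still switch to satellite and 3D view by hand as described in the next steps.

```python
# Sketch: build shareable Google Maps search links for places mentioned in this guide,
# using the documented Maps URL format (https://www.google.com/maps/search/?api=1&query=...).
from urllib.parse import quote_plus

places = ["Baku, Azerbaijan", "Sheki, Azerbaijan", "Qobustan National Park", "Caspian Sea"]

for place in places:
    print(f"{place}: https://www.google.com/maps/search/?api=1&query={quote_plus(place)}")
```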

Step 3: Switch to satellite view and zoom in

-

Once you have found the location you want to see, you need to switch to satellite view and zoom in. Satellite view shows you the actual images of the Earth taken by satellites, instead of the default map view that shows you the roads and labels. You can switch to satellite view by clicking on the button on the bottom left corner of the map or by tapping on the layers icon on the top right corner of the app. You can zoom in by using the + and - buttons on the bottom right corner of the map or by pinching and spreading your fingers on the app.

-

Step 4: Click on the 3D button and tilt the map

-

Now comes the fun part. To see the 3D models of the buildings, terrain, and landmarks, you need to click on the 3D button and tilt the map. The 3D button is located on the bottom right corner of the map or on the top right corner of the app. It looks like a cube with an arrow pointing up. When you click on it, you will see a slider that allows you to tilt the map from 0 to 90 degrees. You can also use your mouse or your fingers to drag and rotate the map.

-

Step 5: Enjoy the stunning views of Azerbaijan's landscapes, cities, and monuments

-

Congratulations! You have successfully activated Google Maps 3D and you can now enjoy the stunning views of Azerbaijan's landscapes, cities, and monuments. You can explore different places by moving around the map, zooming in and out, tilting and rotating the map, and clicking on any point of interest to get more information. You can also take screenshots or share your views with your friends and family.

-

Some of the best places to see in Google Maps 3D Azerbaijan

-

Baku: The capital and the largest city of Azerbaijan

-

Baku is a city that combines ancient history with modern development. It is located on the shores of the Caspian Sea and it is home to more than two million people. Baku has many attractions and landmarks that you can see in Google Maps 3D, such as:

-

The Flame Towers: A trio of skyscrapers that resemble flames

-

The Flame Towers are one of the most iconic symbols of Baku. They are three skyscrapers that have a curved shape and are covered with LED screens that display images of fire at night. They are located on a hill overlooking the city and they offer a spectacular view of Baku Bay. The Flame Towers house offices, hotels, apartments, and a shopping mall.

-

The Old City: A UNESCO World Heritage Site with ancient walls and buildings

-

The Old City is the historical core of Baku. It is surrounded by a wall that dates back to the 12th century and it contains many old buildings and monuments that reflect Baku's rich cultural heritage. Some of the highlights of the Old City are:

-
    -
  • The Maiden Tower: A mysterious tower that is believed to be more than 1,000 years old. It has eight floors and a spiral staircase inside. It was used as a defensive structure, a watchtower, a lighthouse, and a museum.
  • -
  • The Palace of Shirvanshahs: A complex of palaces, mosques, mausoleums, baths, and gardens that was built by the rulers of Shirvan, a medieval state that existed in Azerbaijan. It is considered one of the finest examples of Islamic architecture in the region.
  • -
  • The Juma Mosque: A mosque that was built in the 12th century and rebuilt several times. It has a rectangular shape and a large dome. It is one of the oldest mosques in Baku and it can accommodate up to 5,000 worshippers.
  • -
-

The Heydar Aliyev Center: A futuristic cultural complex designed by Zaha Hadid

-

The Heydar Aliyev Center is a stunning building that was designed by the famous architect Zaha Hadid. It has a curved and fluid shape that resembles a wave or a flower. It is made of glass, steel, and concrete and it covers an area of 57,500 square meters. The Heydar Aliyev Center hosts exhibitions, concerts, conferences, and other events. It also houses a museum, a library, and an auditorium.

-

Sheki: A historical city in the northwest of Azerbaijan

-

Sheki is a city that dates back to more than 2,500 years ago. It is located in the foothills of the Greater Caucasus Mountains and it is famous for its natural beauty, cultural heritage, and silk production. Sheki has many attractions and landmarks that you can see in Google Maps 3D, such as:

-

The Khan's Palace: A masterpiece of Islamic architecture with colorful mosaics and stained glass

-

The Khan's Palace is a palace that was built by the Khan of Sheki in the 18th century. It is a two-story building that has six rooms and a large hall. The palace is decorated with exquisite mosaics, paintings, and stained glass windows that depict scenes from nature and mythology. The palace is surrounded by a garden and a fountain.

-

The Caravanserai: A medieval inn for travelers and merchants

-

The Caravanserai is a building that was used as an inn for travelers and merchants who came to Sheki on the Silk Road. It was built in the 18th century and it has two floors and 300 rooms. The Caravanserai has a courtyard with a pool and a fountain. It also has a mosque, a bathhouse, and a stable. Today, the Caravanserai is used as a hotel and a restaurant.

-

The Albanian Church: A 12th-century church with a unique octagonal shape

-

The Albanian Church is a church that belongs to the ancient Christian community of Albania, which was a kingdom that existed in Azerbaijan from the 4th to the 9th century. The church was built in the 12th century and it has an octagonal shape with eight domes. The church is made of stone and brick and it has frescoes and inscriptions inside. The church is also known as the Church of Kish or the Mother of All Churches.

-

Qobustan: A national park with prehistoric rock art and mud volcanoes

-

Qobustan is a national park that covers an area of 537 square kilometers in the southeast of Azerbaijan. It is famous for its prehistoric rock art and its mud volcanoes. Qobustan has many attractions and landmarks that you can see in Google Maps 3D, such as:

-

The Petroglyphs: More than 6,000 rock engravings dating back to the Stone Age

-

The Petroglyphs are rock engravings that depict animals, humans, plants, symbols, and scenes from daily life. They were made by the ancient people who lived in Qobustan from the Paleolithic to the Middle Ages. They are considered to be one of the oldest and most important examples of rock art in the world. They are also listed as a UNESCO World Heritage Site.

-

The Gaval Dash: A musical stone that produces sounds when hit with smaller stones

-

The Gaval Dash is a large flat stone that has a hollow surface. When it is hit with smaller stones, it produces sounds that resemble musical notes. The Gaval Dash is believed to be an ancient musical instrument that was used by the people who lived in Qobustan thousands of years ago. It is also known as the Tambourine Stone or the Singing Stone.

-

The Mud Volcanoes: Small cones that erupt with mud and gas

-

The Mud Volcanoes are small cones formed by the pressure of gas and water under the ground. They erupt with mud and gas that create bubbles and fountains on the surface. The Mud Volcanoes are one of the most distinctive natural phenomena in Qobustan and attract many visitors every year. They are also home to many rare plants and animals.

-

Conclusion

-

Google Maps 3D is a great way to explore Azerbaijan, a country that has a lot to offer to its visitors. You can see the amazing 3D models of its landscapes, cities, and monuments, and learn more about its culture, history, and nature. You can also have fun and be creative with the different views and angles that Google Maps 3D provides. Google Maps 3D is a tool that can inspire you to travel to Azerbaijan in real life, or to appreciate its beauty from afar.

-

FAQs

-
    -
  1. How accurate is Google Maps 3D?
  2. -

    Google Maps 3D is based on satellite imagery, aerial photography, and computer-generated models, which are constantly updated and improved. However, Google Maps 3D may not reflect the current situation of some places, especially those that are undergoing rapid changes or development. Google Maps 3D may also have some errors or glitches, such as missing or distorted features. Therefore, Google Maps 3D should not be used as a source of authoritative information, but rather as a way of exploration and entertainment.

    -
  3. How can I see the street view of Azerbaijan on Google Maps?
  4. -

    Google Maps also offers a street view feature, which allows you to see the ground-level images of some places on Google Maps. You can access it by dragging the yellow pegman icon on the bottom right corner of the map or by tapping on the street view icon on the top left corner of the app. You can then move around the map by clicking or tapping on the arrows or by using your mouse or fingers. However, not all places in Azerbaijan have street view coverage, so you may not be able to see some areas in detail.

    -
  5. What are some other features of Google Maps that I can use to explore Azerbaijan?
  6. -

    Google Maps has many other features that you can use to enhance your experience of exploring Azerbaijan. Some of them are:

    -
      -
    • The directions feature, which allows you to get the best route and mode of transportation to get from one place to another.
    • -
    • The traffic feature, which allows you to see the real-time traffic conditions and delays on the roads.
    • -
    • The transit feature, which allows you to see the public transportation options and schedules for your destination.
    • -
    • The nearby feature, which allows you to find the closest restaurants, hotels, shops, attractions, and other places of interest.
    • -
    • The photos feature, which allows you to see the images of different places taken by other users or by Google.
    • -
    -
  7. What are some other countries that I can see in Google Maps 3D?
  8. -

    Google Maps 3D is available for hundreds of countries around the world, covering all continents and regions. Some of them are:

    -
      -
    • The United States, Canada, Mexico, Brazil, Argentina, Chile, Colombia, Peru, and other countries in North and South America.
    • -
    • The United Kingdom, France, Germany, Italy, Spain, Portugal, Greece, Turkey, Russia, Poland, Sweden, Norway, Finland, Denmark, and other countries in Europe.
    • -
    • China, Japan, India, South Korea, Thailand, Vietnam, Indonesia, Malaysia, Singapore, and other countries in Asia.
    • -
    • Australia, New Zealand, Fiji, Papua New Guinea, and other countries in Oceania.
    • -
    • Egypt, Morocco, South Africa, Kenya, Tanzania, Nigeria, Ghana, Ethiopia, and other countries in Africa.
    • -
    -
  9. How can I give feedback or report a problem on Google Maps 3D?
  10. -

    If you have any feedback or suggestions on how to improve Google Maps 3D, or if you encounter any problem or error on Google Maps 3D, you can contact Google by using the feedback or report a problem feature. You can access it by clicking on the menu icon on the top left corner of the map or by tapping on the menu icon on the top right corner of the app. You can then select the feedback or report a problem option and follow the instructions. You can also attach a screenshot or a description of the issue. Google appreciates your feedback and will try to fix any problem as soon as possible.

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Video TikTok Without Logo - Free Online TikTok Downloader Tool.md b/spaces/fatiXbelha/sd/Download Video TikTok Without Logo - Free Online TikTok Downloader Tool.md deleted file mode 100644 index 0cb4d553404dcf14337973de8182c9eba47524d1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Video TikTok Without Logo - Free Online TikTok Downloader Tool.md +++ /dev/null @@ -1,86 +0,0 @@ - -

    How to Download TikTok Status Video

    -

    TikTok is a popular social media platform that allows users to create and share short videos with various effects, filters, music, and stickers. TikTok status video is a type of video that users post on their profiles or stories to express their mood, feelings, thoughts, or opinions. TikTok status video can be funny, romantic, sad, motivational, inspirational, or anything else that suits the user's personality and style.

    -

    download tiktok status video


    DOWNLOAD 🔗 https://urllie.com/2uNFeO



    -

    Downloading TikTok status video can be useful for many reasons. You may want to save your favorite videos for offline viewing, share them with your friends on other platforms, edit them with other tools, or use them as your own status video. However, downloading TikTok status video is not as easy as it seems. Depending on whether you want to download the video with or without the watermark, you may need to use different methods and tools. In this article, we will show you how to download TikTok status video with watermark and without watermark using various online tools and mobile apps.

    -

    How to Download TikTok Status Video with Watermark

    -

    If you don't mind having the TikTok logo and the creator's name on your downloaded video, you can download TikTok status video with watermark using the following methods.

    -

    Using the TikTok app

    -

    This is the easiest way to download TikTok status video with watermark, as you can do it directly from the app on your mobile device. Here are the steps to follow:

    -
      -
    1. Open the TikTok app and find the status video you want to download.
    2. -
    3. Tap the arrow icon in the lower right corner of the screen, below the comments and likes icon.
    4. -
    5. Press "Save video" or the download icon. The video will be saved to your device's gallery or camera roll.
    6. -
    7. Select "Done" or share your downloaded video to another platform if you want.
    8. -
    -

    Note that this method only works if the creator has marked the video as public. If the video is private, you won't see the save option.

    -

    Using a web browser

    -

    If you want to download TikTok status video with watermark from your computer or laptop, you can use a web browser to access the TikTok website. Here are the steps to follow:

    -
      -
    1. Go to https://www.tiktok.com and find the status video you want to download.
    2. -
    3. Click on the three dots icon in the lower right corner of the video player and select "Copy link".
    4. -
    5. Paste the link into a new tab and press enter. The video will start playing in your browser.
    6. -
    7. Right-click on the video and select "Save video as". Choose a location and a name for your downloaded file.
    8. -
    -
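If you save videos regularly, the open-source yt-dlp tool can fetch a public TikTok video directly from the link you copied, instead of playing it in a browser tab first. This is only a sketch: it assumes yt-dlp is installed (pip install yt-dlp), the URL below is a placeholder for the link you actually copied, and it only works for videos the creator has made public. Always respect the creator's rights when you reuse a download.

```python
# Sketch: save a public TikTok video from its copied link with the yt-dlp library
# (pip install yt-dlp). The URL below is a placeholder -- paste the real link you copied.
from yt_dlp import YoutubeDL

url = "https://www.tiktok.com/@username/video/1234567890"  # placeholder link
options = {"outtmpl": "%(uploader)s-%(id)s.%(ext)s"}       # pattern for the saved file name

with YoutubeDL(options) as ydl:
    ydl.download([url])
```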

    How to Download TikTok Status Video without Watermark

    -

    If you want to download TikTok status video without watermark, you will need to use some third-party tools that can remove the watermark from the video. There are many online tools and mobile apps that can help you do this. Here are some of the best ones:

    -

    Using online tools

    -

    Online tools are convenient as they don't require any installation or registration. You just need to paste the link of the TikTok status video and download it without watermark. Here are some of the best online tools for downloading TikTok status video without watermark:

    -

    SnapTik

    -

SnapTik is one of the best online tools for downloading TikTok status video without watermark: you just paste the link of the video and it gives you a clean file to save. Many video-editing apps can then help you polish the result with filters, stickers, etc., including your downloaded TikTok status video.

  11. YouTube: A popular and powerful video-sharing platform that can help you upload and watch videos of various categories, genres, and topics, including your downloaded TikTok status video.
  12. - -

    Q: How can I make my own TikTok status video?

    -

    A: You can make your own TikTok status video using the TikTok app itself or other tools and apps that can help you create and edit short videos. Here are some tips to make your own TikTok status video:

    -

    -
      -
    • Choose a theme or topic for your status video, such as mood, feelings, thoughts, opinions, etc.
    • -
    • Choose a song or sound that matches your theme or topic, or record your own voice-over.
    • -
    • Choose a filter or effect that suits your style and personality, or use the original camera mode.
    • -
    • Record your video using the TikTok app or another tool or app. You can use the timer, speed, beauty, and other features to enhance your video.
    • -
    • Edit your video using the TikTok app or another tool or app. You can trim, crop, rotate, merge, split, add stickers, text, and more to your video.
    • -
    • Save and share your video on TikTok or other platforms. You can also download your video for offline use.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy 8 Ball Pool Offline and Online on PC with NoxPlayer Emulator.md b/spaces/fatiXbelha/sd/Enjoy 8 Ball Pool Offline and Online on PC with NoxPlayer Emulator.md deleted file mode 100644 index 48cd66f2dbb14177bf6069218a183ef2536e21c8..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy 8 Ball Pool Offline and Online on PC with NoxPlayer Emulator.md +++ /dev/null @@ -1,121 +0,0 @@ - -

    How to Download 8 Ball Pool PC Offline

    -

    Do you love playing pool games on your mobile device but wish you could enjoy them on a bigger screen? Do you want to play your favorite pool game without worrying about internet connection or data usage? If you answered yes to any of these questions, then you should try downloading 8 Ball Pool PC offline.

    -

    download 8 ball pool pc offline


    Download Zip ::: https://urllie.com/2uNGel



    -

    8 Ball Pool is one of the most popular and addictive pool games in the world. It is developed by Miniclip and has millions of fans across the globe. You can play online with your friends or against other players from around the world. You can also customize your cue and table, participate in tournaments, and win exclusive items.

    -

    But what if you don't have access to the internet or you want to save your data? Don't worry, you can still play 8 Ball Pool on your PC without any internet connection. All you need is an emulator that can run Android apps on your computer. In this article, we will show you how to download 8 Ball Pool for PC using an emulator and how to play it offline on your computer. We will also tell you about the features, tips, and pros and cons of playing 8 Ball Pool PC offline.

    -

    How to Download 8 Ball Pool for PC Using an Emulator

    -

    An emulator is a software that can mimic the functionality of another device or platform. For example, an Android emulator can let you run Android apps on your PC. There are many emulators available for free online, but we recommend using BlueStacks. BlueStacks is one of the most popular and reliable emulators that can run Android apps smoothly on your PC.

    -

    To download 8 Ball Pool for PC using BlueStacks, follow these steps:

    -
      -
    1. Go to [BlueStacks website](^4^) and download the latest version of the emulator.
    2. -
    3. Install BlueStacks on your PC by following the instructions on the screen.
    4. -
    5. Launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
    6. -
    7. Go to Google Play Store on BlueStacks and search for "8 Ball Pool".
    8. -
    9. Click on the install button and wait for the app to download.
    10. -
    11. Once the app is installed, you can find it on the home screen or in the app drawer of BlueStacks.
    12. -
    -

    Congratulations, you have successfully downloaded 8 Ball Pool for PC using BlueStacks. Now you can play it offline on your computer.
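For players who like to script their setup, Android emulators such as BlueStacks can often also be reached with the standard adb tool from the Android SDK. The sketch below is optional and makes several assumptions: adb is installed on your PC, your emulator has its ADB/debugging option enabled, and the port (5555 here) matches what your emulator's settings actually show, since it differs between emulators and versions.

```python
# Optional sketch: attach the standard adb tool to a running Android emulator.
# Assumptions: adb is installed, the emulator's ADB option is enabled, and 5555 is
# the port shown in your emulator's settings (it varies between emulators/versions).
import subprocess

subprocess.run(["adb", "connect", "127.0.0.1:5555"], check=True)  # attach to the emulator
subprocess.run(["adb", "devices"], check=True)                    # the emulator should now be listed
```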

    -

    -

    How to Play 8 Ball Pool Offline on PC

    -

    To play 8 Ball Pool offline on PC, you need to make sure that you have launched the app at least once with an internet connection. This is because the app needs to verify your account and sync your data with the server. Once you have done that, you can disconnect from the internet and play the game offline.

    -

    To play 8 Ball Pool offline on PC, follow these steps:

    -
      -
    1. Launch BlueStacks and open 8 Ball Pool app.
    2. -
    3. Make sure that you have an internet connection and sign in with your Miniclip or Facebook account.
    4. -
    5. Play a few rounds online to save your progress and settings.
    6. -
    7. Disconnect from the internet by turning off your Wi-Fi or mobile data.
    8. -
    9. Go to the game mode menu and select "Practice Offline".
    10. -
    11. Choose your difficulty level and start playing.
    12. -
    -

    That's it, you can now enjoy playing 8 Ball Pool offline on your PC. You can practice your skills, challenge yourself, and have fun without any internet connection.

    -

    Features of 8 Ball Pool PC Offline

    -

    Playing 8 Ball Pool offline on PC has many advantages. You can experience the same features and quality of the game as you would on your mobile device, but on a bigger and better screen. You can also avoid any interruptions or distractions from ads, notifications, or messages. Here are some of the features of 8 Ball Pool PC offline:

    -

    Graphics and Sound Quality

    -

    8 Ball Pool has amazing graphics and sound quality that make you feel like you are playing in a real pool hall. The game has realistic physics and animations that simulate the movement and collision of the balls. The game also has various sound effects and music that enhance the atmosphere and mood of the game.

    -

    Game Modes and Levels

    -

    8 Ball Pool has different game modes and levels that suit your preference and skill level. You can play in the Practice Offline mode, where you can choose from three difficulty levels: Easy, Medium, or Hard. You can also play in the Quick Fire mode, where you have to pot as many balls as you can in a limited time. You can also play in the Pass n Play mode, where you can play with a friend on the same device.

    -

    Customization and Rewards

    -

    8 Ball Pool lets you customize your cue and table with various designs and colors. You can also unlock and collect different cues and tables by playing the game and earning coins. You can also win exclusive items and rewards by completing achievements and missions.

    -

    Tips and Tricks for Playing 8 Ball Pool PC Offline

    -

    If you want to improve your game and become a master of 8 Ball Pool, you need to learn some tips and tricks that can help you win more matches. Here are some of them:

    -

    How to Improve Your Aim and Accuracy

    -

    To improve your aim and accuracy, you need to pay attention to the guidelines that show you the direction and angle of your shot. You can also use the zoom feature to get a closer look at the table and adjust your aim accordingly. You can also practice your shots in the Practice Offline mode until you get familiar with the controls and mechanics of the game.

    -

    How to Use Spin and Power Shots

    -

    To use spin and power shots, you need to use the spin control and the power bar that are located at the bottom of the screen. You can use spin to change the direction or curve of the cue ball after it hits another ball. You can use power to increase or decrease the speed or force of your shot. You can use spin and power shots to make more difficult or creative shots, such as bank shots, trick shots, or combination shots.

    -

    How to Earn Coins and Cash

    -

    To earn coins and cash, you need to play more games and win more matches. You can also earn coins and cash by watching videos, completing offers, or inviting friends to play the game. You can use coins and cash to buy new cues, tables, or items in the game shop.

    -

    Pros and Cons of Playing 8 Ball Pool PC Offline

    -

    Playing 8 Ball Pool offline on PC has its pros and cons. Here are some of them:

    -

    Pros:

    -
      -
    • You don't need an internet connection or data usage to play the game.
    • -
    • You don't have to deal with ads, pop-ups, or interruptions while playing.
    • -
    • You don't have to worry about lag, glitches, or bugs that might affect your gameplay.
    • -
    • You can play on a bigger screen with better resolution and performance.
    • -
    • You can play anytime, anywhere, without any restrictions or limitations.
    • -
    -

    Cons:

    -
      -
    • You can't play online with your friends or other players from around the world.
    • -
    • You can't access the latest updates, features, or events that might be available in the online version.
    • -
    • You can't sync your progress or data with your account or cloud save.
    • -
    • You might miss out on some exclusive items or rewards that might be offered in the online version.
    • -
    • You might get bored or lose interest after playing for a long time.
    • -
    -

    Conclusion

    -

    In conclusion, 8 Ball Pool is a fun and addictive pool game that you can play on your PC offline. You can download it for free using an emulator like BlueStacks and enjoy its features, quality, and gameplay without any internet connection. You can also learn some tips and tricks that can help you improve your skills and win more matches. However, playing 8 Ball Pool offline on PC also has some drawbacks, such as not being able to play online with other players, not getting the latest updates, or not syncing your data. Therefore, you should weigh the pros and cons before deciding to play 8 Ball Pool offline on PC. If you are looking for a fun and challenging pool game that you can play on your PC offline, then you should download 8 Ball Pool today. You will not regret it. It is one of the best pool games ever made and it will keep you entertained for hours. So what are you waiting for? Download 8 Ball Pool PC offline now and start playing!

    FAQs

    -

    Here are some frequently asked questions about 8 Ball Pool PC offline:

    -
      -
    1. Q: Is 8 Ball Pool PC offline safe to download and play?
    2. -
    3. A: Yes, 8 Ball Pool PC offline is safe to download and play. You just need to make sure that you download it from a trusted source, such as the official BlueStacks website or the Google Play Store on BlueStacks. You also need to make sure that you have a reliable antivirus software on your PC to protect it from any malware or viruses.
    4. -
    5. Q: How much space does 8 Ball Pool PC offline require on my PC?
    6. -
    7. A: 8 Ball Pool PC offline requires about 100 MB of space on your PC. However, you also need to consider the space required by the emulator, which is about 500 MB. Therefore, you need to have at least 600 MB of free space on your PC to download and play 8 Ball Pool PC offline.
    8. -
    9. Q: Can I play 8 Ball Pool PC offline with a keyboard and mouse?
    10. -
    11. A: Yes, you can play 8 Ball Pool PC offline with a keyboard and mouse. You can use the arrow keys or the mouse to move the cue and adjust the aim. You can also use the space bar or the left mouse button to shoot. You can also customize the controls according to your preference in the settings menu of BlueStacks.
    12. -
    13. Q: Can I switch from offline to online mode in 8 Ball Pool PC?
    14. -
    15. A: Yes, you can switch from offline to online mode in 8 Ball Pool PC. You just need to connect to the internet and sign in with your Miniclip or Facebook account. You can then access the online features of the game, such as multiplayer mode, tournaments, leaderboards, etc.
    16. -
    17. Q: Can I transfer my progress and data from 8 Ball Pool PC offline to another device?
    18. -
    19. A: No, you cannot transfer your progress and data from 8 Ball Pool PC offline to another device. This is because your progress and data are stored locally on your PC and not on the cloud. Therefore, if you want to play 8 Ball Pool on another device, you need to start from scratch.
    20. -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Family Farm Adventure A Gorgeous Farming Game with Creative Puzzles.md b/spaces/fatiXbelha/sd/Family Farm Adventure A Gorgeous Farming Game with Creative Puzzles.md deleted file mode 100644 index 461ab36379852552151a6c1fe6043c9429c2e426..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Family Farm Adventure A Gorgeous Farming Game with Creative Puzzles.md +++ /dev/null @@ -1,128 +0,0 @@ - -

    Family Farm Adventure APK Download: A Guide for Beginners

    -

    If you are looking for a relaxing and enjoyable farming simulator game, you might want to check out Family Farm Adventure. This game lets you harvest various crops, explore mysterious islands, and start your own prosperous farm town. In this article, we will tell you everything you need to know about Family Farm Adventure, how to download and install the APK version of the game, and how to play it like a pro.

    -

    What is Family Farm Adventure?

    -

    Family Farm Adventure is a game developed by Century Games Pte. Ltd., the makers of Dragonscapes Adventure. It is a game that combines farming simulation, story-telling, and puzzle-solving elements. You will join Felicia, a photographer who returns to her grandma's farm on a tropical island, and Toby, an archaeologist who helps her explore the nearby islands. Along the way, you will meet new friends, discover hidden treasures, and restore the farm to its former glory.

    -

    family farm adventure apk download


    Download >>> https://urllie.com/2uNxXR



    -

    A farming simulator game with a beautiful story and exploration

    -

    One of the main attractions of Family Farm Adventure is its immersive and heartwarming story. You will get to know Felicia's family history, her grandma's secrets, and her own dreams. You will also help her solve puzzles and mysteries that will reveal more about the island's culture and history. The game has a lot of surprises, romance, and friendship that will keep you hooked.

    -

    Another aspect of the game that makes it stand out is its exploration feature. You will not only manage your farm, but also travel to different islands with Felicia and Toby. Each island has its own theme, scenery, and challenges. You will need to clear obstacles, collect resources, and solve puzzles to unlock new areas and items. You will also encounter friendly and quirky villagers, as well as wild animals that you can interact with.

    -

    The main features of the game

    -

    Family Farm Adventure has many features that make it fun and engaging. Here are some of them:

    -
      -
    • Story. Immerse yourself in the beautiful story in this simulator game, full of mysteries, surprises, romance, and friendship. Solve puzzles to continue the story and learn more about the farm town.
    • -
    • Explorations. Leave your town and explore mysterious tropical islands with the fearless photographer Felicia and the bright archaeologist Toby and help them solve puzzles along the way. Bring the treasures back to the farm.
    • -
    • Decorations. Decorate your flower farm! Restore houses, decorations, and centerpieces that are essential for the Festival of Flowers. Finish all preparations for this festival and celebrate it with everyone on the farm.
    • -
    • Farming. Start your own farm on a tropical island. Harvest crops, raise farm animals and produce food with your cooking skills. Turn your farm in this simulator into a cooking powerhouse.
    • -
    • Adventures. Complete challenging puzzles on your travels through these mysterious islands. Take a break from your adventures by checking on the animals on your farm.
    • -
    • People and Animals. Meet friendly and peculiar villagers, as well as quirky wild animals. Ask them to come visit your farm and do some cooking together.
    • -
    • Treasures. Discover hidden treasures and rare ancient artifacts by solving creative puzzles. Trade them in for all kinds of bonuses that will help you on your farm. Some puzzles will lead you to unexpected rewards to decorate your town!
    • -
    -

    How to download Family Farm Adventure APK?

    -

    If you want to play Family Farm Adventure on your Android device, you have two options: you can either download it from Google Play Store or from an APK website. The APK version is a file that contains all the data and code of the game, which you can install manually on your device. Here are the steps to download and install the APK version of Family Farm Adventure:

    -

    The steps to download and install the game on Android devices

    -
      -
    1. Find a reliable APK website. There are many websites that offer APK files for various games and apps, but not all of them are safe and trustworthy. You should look for a website that has positive reviews, ratings, and feedback from other users. Some examples of reputable APK websites are APKPure, APKMirror, and APKMonk.
    2. -
    3. Download the APK file. Once you have found a website that offers Family Farm Adventure APK, you can download it by clicking on the download button or link. The file size is about 100 MB, so make sure you have enough space on your device and a stable internet connection.
    4. -
    5. Enable unknown sources. Before you can install the APK file, you need to allow your device to install apps from unknown sources. To do this, go to your device's settings, then security, then unknown sources. Turn on the toggle or check the box to enable this option.
    6. -
7. Install the APK file. After you have enabled unknown sources, you can install the APK file by tapping on it or opening it with a file manager app. Follow the instructions on the screen to complete the installation process. (If you prefer a command line, see the adb sketch after this list.)
    8. -
    9. Launch the game. Once the installation is done, you can launch the game by tapping on its icon on your home screen or app drawer. Enjoy playing Family Farm Adventure!
    10. -
    -
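If you are comfortable with a command line, you can also sideload the APK from a PC with adb instead of enabling unknown sources and tapping the file on the phone (steps 5 and 7 above). This is only a minimal sketch under assumptions not stated in the guide: adb is installed and on your PATH, USB debugging is enabled on the device, and the file name below is illustrative.

```python
import subprocess

# List connected devices so you can confirm the phone is detected and authorized.
subprocess.run(["adb", "devices"], check=True)

# Install the downloaded APK; -r reinstalls over an existing copy while keeping its data.
# The file name is illustrative - use the actual name of the file you downloaded.
subprocess.run(["adb", "install", "-r", "family-farm-adventure.apk"], check=True)
```

The same two commands work directly in a terminal; wrapping them in Python is only convenient if you sideload apps often.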

    The benefits of downloading the APK version

    -

    There are some benefits of downloading the APK version of Family Farm Adventure instead of the Google Play Store version. Here are some of them:

    -
      -
    • Access to the latest updates. Sometimes, the Google Play Store version of the game may not have the latest updates or features that the developers have released. The APK version, however, is usually updated faster and more frequently, so you can enjoy the newest content and improvements as soon as possible.
    • -
    • Bypass regional restrictions. Some games may not be available in certain countries or regions due to various reasons. If you want to play Family Farm Adventure but it is not available in your location, you can download the APK version and play it without any restrictions.
    • -
    • Save storage space. The APK version of Family Farm Adventure may be smaller in size than the Google Play Store version, which means it will take up less space on your device. This can help you save storage space and improve your device's performance.
    • -
    -

    How to play Family Farm Adventure?

    -

    Now that you have downloaded and installed Family Farm Adventure, you may wonder how to play it and what are some tips and tricks to make the most out of it. Here are some basic gameplay mechanics and strategies that will help you build a thriving farm and explore all the islands.

    -

    The basic gameplay mechanics and tips

    -

    The game is divided into two main modes: farming and exploration. In farming mode, you will manage your farm town by planting crops, raising animals, producing food, decorating buildings, and completing quests. In exploration mode, you will travel to different islands with Felicia and Toby by using a boat or a hot air balloon. You will clear obstacles, collect resources, solve puzzles, and discover new areas and items.

    -

    Here are some tips for playing Family Farm Adventure:

    -


    -
      -
    • Follow the story quests. The story quests will guide you through the game's plot and introduce you to new characters and features. They will also reward you with coins, gems, energy, and other items that will help you progress faster.
    • -
    • Collect daily rewards and bonuses. Every day, you can collect free rewards and bonuses by logging in, watching ads, spinning a wheel, opening chests, and completing tasks. These rewards can include coins, gems, energy, boosters, decorations, and more.
    • -
    • Join a club or create your own. A club is a group of players who can chat with each other, help each other with requests, exchange gifts, and participate in club events. Joining a club or creating your own can make your game more fun and social.
    • -
    • Upgrade your buildings and tools. As you play the game, you will unlock new buildings and tools that will improve your farm town's productivity and efficiency. You can upgrade them by using coins or gems or by collecting specific materials.
    • -
• Use boosters wisely. Boosters are special items that can help you in various ways in both farming and exploration modes. You can solve puzzles by trial and error, but avoid making too many wrong moves, as they may cost you energy or time.
    • -
    • Have fun and be creative. The most important strategy for playing Family Farm Adventure is to have fun and be creative. The game gives you a lot of freedom and options to customize your farm town and your island adventures. You can choose the crops, animals, buildings, and decorations that suit your style and taste. You can also mix and match different items and colors to create unique combinations. You can also share your farm town and your island discoveries with other players and see their creations as well.
    • -
    -

    Conclusion

    -

    Family Farm Adventure is a game that will keep you entertained and relaxed for hours. It is a game that combines farming simulation, story-telling, and puzzle-solving elements in a beautiful tropical setting. You can download the APK version of the game from a reliable website and install it on your Android device easily. You can also follow the tips and strategies we have shared in this article to make the most out of the game. We hope you enjoy playing Family Farm Adventure and have a wonderful time!

    -

    FAQs

    -

    Here are some frequently asked questions about Family Farm Adventure:

Q: Is Family Farm Adventure free to play?
A: Yes, Family Farm Adventure is free to download and play. However, it also offers in-app purchases that can enhance your gaming experience.

Q: Can I play Family Farm Adventure offline?
A: No, Family Farm Adventure requires an internet connection to play. You need to be online to access all the features and content of the game.

Q: How can I contact the developers or report a problem?
A: You can contact the developers or report a problem by using the in-game support feature or by sending an email to support@centurygame.com.

Q: How can I get more gems or energy in Family Farm Adventure?
A: You can get more gems or energy by completing quests, watching ads, spinning a wheel, opening chests, participating in events, or buying them with real money.

Q: How can I transfer my game progress to another device?
A: You can transfer your game progress to another device by connecting your game account to Facebook or Google Play Games. Then, you can log in with the same account on another device and continue playing where you left off.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatgpt.py b/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatgpt.py deleted file mode 100644 index eef8fbf0b43f30b915f770f4bc54120c84ebd092..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_chatgpt.py +++ /dev/null @@ -1,285 +0,0 @@ -# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目 - -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import json -import time -import gradio as gr -import logging -import traceback -import requests -import importlib - -# config_private.py放自己的秘密如API和代理网址 -# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件 -from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc -proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \ - get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY') - -timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \ - '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。' - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - stream_response = response.iter_lines() - result = '' - while True: - try: chunk = next(stream_response).decode() - except StopIteration: - break - except requests.exceptions.ConnectionError: - chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。 - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode() - if "reduce the length" in error_msg: - raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg) - else: - raise RuntimeError("OpenAI拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break # api2d 正常完成 - json_data = json.loads(chunk.lstrip('data:'))['choices'][0] - delta = json_data["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: 
observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if json_data['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if is_any_api_key(inputs): - chatbot._cookies['api_key'] = inputs - chatbot.append(("输入已识别为openai的api_key", what_keys(inputs))) - yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面 - return - elif not is_any_api_key(chatbot._cookies['api_key']): - chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")) - yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面 - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - try: - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream) - except RuntimeError as e: - chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。") - yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面 - return - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=True - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS);break - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg)) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - - is_head_of_the_stream = True - if stream: - stream_response = response.iter_lines() - while True: - try: - chunk = next(stream_response) - except StopIteration: - # 非OpenAI官方接口的出现这样的报错,OpenAI和API2D不会走这里 - from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk.decode())}") - yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk.decode()) # 刷新界面 - return - - # print(chunk.decode()[6:]) - if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()): - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - try: - chunk_decoded = chunk.decode() - # 
前者API2D的 - if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0): - # 判定为数据流的结束,gpt_replying_buffer也写完了 - logging.info(f'[response] {gpt_replying_buffer}') - break - # 处理数据流的主体 - chunkjson = json.loads(chunk_decoded[6:]) - status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}" - # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出 - gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"] - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - chunk_decoded = chunk.decode() - error_msg = chunk_decoded - if "reduce the length" in error_msg: - if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出 - history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'], - max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一 - chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)") - # history = [] # 清除历史 - elif "does not exist" in error_msg: - chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.") - elif "Incorrect API key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.") - elif "exceeded your current quota" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.") - elif "bad forward key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.") - elif "Not enough point" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.") - else: - from toolbox import regular_txt_to_markdown - tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}") - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - -def generate_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备 - """ - if not is_any_api_key(llm_kwargs['api_key']): - raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 
长效解决方案:在config.py中配置。") - - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api_key}" - } - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - if what_gpt_answer["content"] == timeout_bot_msg: continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'].strip('api2d-'), - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return headers,payload - - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Alto 39s Adventure Hack Apk Download !NEW!.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Alto 39s Adventure Hack Apk Download !NEW!.md deleted file mode 100644 index 66805183313cbd481dd5e88ca79646c49ad18bcf..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Alto 39s Adventure Hack Apk Download !NEW!.md +++ /dev/null @@ -1,75 +0,0 @@ - -

    Alto's Adventure Hack APK Download: How to Get Unlimited Coins and Wingsuits in the Endless Snowboarding Game

    -

    Are you a fan of Alto's Adventure, the beautiful and relaxing endless runner snowboarding game for Android devices? Do you want to enjoy the game without any limitations or interruptions? If yes, then you should try Alto's Adventure hack apk download, which will give you unlimited coins and wingsuits to unlock new items and abilities in the game. In this article, we will tell you what is Alto's Adventure, why is it popular, what are the features and benefits of the hack apk version, and how to download and install it safely and easily on your device.

    -

    What is Alto's Adventure and why is it popular?

    -

Alto's Adventure is a 2015 endless runner snowboarding game developed by Team Alto and published by Snowman (iOS) and Noodlecake Studios (Android). The game features a minimalist and evocative art style, fluid and exhilarating physics-based gameplay, a dynamic weather system, an original music score, and a simple one-button control scheme. The game has received universal acclaim from critics and players for its aesthetics, atmosphere, design, and sound.

    -

alto's adventure hack apk download


    Download File ✸✸✸ https://gohhs.com/2uPmjI



    -

    The game follows Alto, a young shepherd who embarks on an adventure across the alpine hills of his native wilderness, along with his friends Maya, Paz, Izel, Felipe, and Tupa. Along the way, they have to rescue runaway llamas, grind on rooftops and flag lines, leap over chasms and rocks, outwit the mountain elders, collect coins and power-ups, perform tricks and combos, and acquire the wingsuit for an entirely new gameplay dynamic. The game has 180 handcrafted goals that challenge the player's skills and creativity.

    -

    The game has some pros and cons that make it appealing or frustrating for different players. Some of the pros are:

    -
      -
    • The game is easy to learn but hard to master.
    • -
    • The game is relaxing but also exciting.
    • -
    • The game is addictive but also rewarding.
    • -
    • The game is beautiful but also realistic.
    • -
    • The game is varied but also consistent.
    • -
    -

    Some of the cons are:

    -
      -
    • The game can be repetitive after a while.
    • -
    • The game can be frustrating when crashing or failing goals.
    • -
    • The game can be expensive when buying items or power-ups.
    • -
• The game can be buggy or lag on some devices or versions.

    • -
    • The game can be boring for some players who prefer more action or complexity.
    • -
    -

    What are the features and benefits of the hack apk version?

    -

    If you want to enjoy Alto's Adventure without any of the cons mentioned above, you should try the hack apk version, which will give you some amazing features and benefits that will enhance your gaming experience. Some of these features and benefits are:

    -

    Unlimited coins and wingsuits to unlock new items and abilities

    -

    With the hack apk version, you will get unlimited coins and wingsuits in the game, which means you can buy any item or power-up you want without worrying about the cost. You can also unlock all the characters and their abilities, such as Maya's speed, Paz's strength, Izel's gadgets, Felipe's double jump, and Tupa's thunder. You can also use the wingsuit as much as you want, which will allow you to fly through the air, avoid obstacles, and perform amazing stunts.

    -

    No ads, no root, no virus, and no in-app purchases

    -

    With the hack apk version, you will not have to deal with any annoying ads that interrupt your gameplay or waste your time. You will also not have to root your device or risk getting a virus or malware from downloading the hack apk file. You will also not have to spend any real money on in-app purchases, as everything is already unlocked and available for you. The hack apk version is 100% safe, secure, and free to use.

    -

    Compatible with all Android devices and versions

    -

    With the hack apk version, you will not have to worry about compatibility issues with your device or version. The hack apk version works with all Android devices and versions, whether they are old or new, low-end or high-end, rooted or unrooted. You just need to have enough storage space and a stable internet connection to download and install the hack apk file. The hack apk version is also updated regularly to match the official version of the game and fix any bugs or glitches.

    -

    How to download and install the hack apk version safely and easily?

    -

    If you are interested in trying the hack apk version of Alto's Adventure, you will need to follow some simple steps to download and install it on your device. Here are the steps:

    -

    -

    The steps to download the hack apk file from a reliable source

    -
      -
    1. Go to a reliable website that offers the hack apk file for Alto's Adventure, such as [Alto's Adventure Hack APK Download].
    2. -
    3. Click on the download button and wait for the download to start.
    4. -
5. Once the download is complete, locate the hack apk file in your device's file manager or downloads folder. (You can optionally verify the file before installing; see the sketch after this list.)
    6. -
    -
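Before installing anything from a third-party site, it is worth confirming that the download is a properly signed APK and, when the site publishes a checksum, that the hash matches. This is a minimal sketch under assumptions of my own rather than part of the guide: the file name is made up, and the signature check needs apksigner from the Android SDK build-tools on your PATH.

```python
import hashlib
import subprocess

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

apk = "altos-adventure-hack.apk"  # illustrative file name

# Compare this value with the checksum listed on the download page, if one is given.
print("SHA-256:", sha256_of(apk))

# Verify the APK signature and print the signing certificate details.
subprocess.run(["apksigner", "verify", "--print-certs", apk], check=True)
```

If the signature check fails or the hash does not match what the site lists, treat the file as suspect and do not install it.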

    The steps to enable unknown sources and install the hack apk file on your device

    -
      -
    1. Before installing the hack apk file, you need to enable unknown sources on your device. To do this, go to your device's settings, then security, then unknown sources, and toggle it on.
    2. -
    3. After enabling unknown sources, go back to the hack apk file and tap on it.
    4. -
    5. A pop-up window will appear asking you to install the app. Tap on install and wait for the installation to finish.
    6. -
    -

    The steps to launch the game and enjoy the hack features

    -
      -
    1. Once the installation is done, you can launch the game by tapping on its icon on your home screen or app drawer.
    2. -
    3. You will see a message saying that you have successfully installed Alto's Adventure hack apk.
    4. -
    5. You can now enjoy the game with unlimited coins and wingsuits, no ads, no root, no virus, and no in-app purchases.
    6. -
    -

    Conclusion

    -

In conclusion, Alto's Adventure is a wonderful endless runner snowboarding game that offers captivating gameplay, stunning graphics, soothing sound, and simple controls. However, if you want to enjoy the game without any limitations or interruptions, you should try Alto's Adventure hack apk download, which will give you unlimited coins and wingsuits to unlock new items and abilities in the game. This hack apk version also removes ads and in-app purchases, and it requires no root and contains no viruses. You just need to follow some easy steps to download and install it on your device safely and easily. So what are you waiting for? Download Alto's Adventure hack apk now and have fun!

    -

    We hope you found this article helpful and informative. If you have any questions or feedback about Alto's Adventure hack apk download, please feel free to leave them in the comments section below. We would love to hear from you and answer your queries. Thank you for reading and happy snowboarding!

    -

    FAQs

    -

    Is Alto's Adventure hack apk legal and safe?

    -

    Alto's Adventure hack apk is not legal, as it violates the terms and conditions of the original game. However, it is safe to use, as it does not contain any virus or malware that can harm your device or data. You just need to download it from a reliable source and enable unknown sources on your device before installing it.

    -

    How can I update Alto's Adventure hack apk?

    -

    Alto's Adventure hack apk is updated regularly to match the official version of the game and fix any bugs or glitches. You can check for updates on the website where you downloaded the hack apk file, or you can enable automatic updates on your device settings. You can also uninstall the hack apk version and install the latest one from scratch.

    -

    What are some tips and tricks to master Alto's Adventure?

    -

    Some of the tips and tricks to master Alto's Adventure are:

    -
      -
    • Use the wingsuit wisely, as it can help you avoid obstacles, collect coins, and perform tricks.
    • -
    • Try to land smoothly, as landing on your back or head will cause you to crash.
    • -
    • Use power-ups such as hover feather, magnet, lotus, and chasm rescue to boost your performance.
    • -
    • Complete goals to level up and unlock new items and abilities.
    • -
    • Explore different biomes such as forests, deserts, temples, and villages to discover new secrets and surprises.
    • -
    -

    How can I play Alto's Adventure on PC or iOS devices?

    -

    Alto's Adventure is available for PC and iOS devices as well as Android devices. You can download it from the official website [Alto's Adventure] or from the respective app stores [Steam] [App Store]. However, you will not be able to use the hack apk version on these platforms, as it is only compatible with Android devices.

    -

    What is the difference between Alto's Adventure and Alto's Odyssey?

    -

Alto's Odyssey is the sequel to Alto's Adventure, released in 2018. It features a new desert setting, new characters, new items, new abilities, new biomes, new weather effects, new music, and new challenges. It also introduces wall riding, balloon bouncing, water sliding, and wind surfing mechanics. However, it retains the same core gameplay, art style, and sound as Alto's Adventure.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Message and Customize Your Chat Experience.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Message and Customize Your Chat Experience.md deleted file mode 100644 index 35639a9be22e9e1ac31429166a5e7fc575d3ee47..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Messenger Message and Customize Your Chat Experience.md +++ /dev/null @@ -1,178 +0,0 @@ -
    -

    Download Messenger Message: How to Save Your Conversations on PC and Mobile

    -

    Messenger is a popular communication app that lets you send text, voice, video, and group messages to your friends and family. But what if you want to save your conversations for future reference, backup, or sharing? In this article, we will show you how to download messenger message on PC and mobile devices, using different methods and tools. We will also share some tips and tricks for managing your downloaded messages.

    -

    download messenger message


    DOWNLOADhttps://gohhs.com/2uPnNc



    -

    Introduction

    -

    What is Messenger and why you might want to download your messages

    -

    Messenger is a free all-in-one communication app that allows you to connect with your Facebook friends, Instagram contacts, and other people across different platforms. You can use Messenger to send unlimited text, voice, video, and group messages, as well as make calls, watch videos together, play games, send money, chat with businesses, and more.

    -

    There are many reasons why you might want to download your messages from Messenger. For example, you might want to:

    -
      -
    • Keep a record of your important conversations, such as personal or professional chats, receipts, invoices, contracts, etc.
    • -
    • Backup your messages in case you lose access to your account or device, or in case Messenger deletes or modifies your messages.
    • -
    • Share your messages with someone else, such as a friend, family member, lawyer, or colleague.
    • -
    • Print your messages for offline use or documentation purposes.
    • -
    • Free up space on your device by deleting your messages after downloading them.
    • -
    -

    Whatever your reason is, downloading your messages from Messenger is not difficult. You just need to follow some simple steps and use some tools that we will explain in the next sections.

    -

    How to download messenger message on PC

    -

    If you want to download your messages from Messenger on your PC, you have three main options: using the desktop app, using the web browser, or using a third-party software. Let's see how each option works.

    -

    Using the desktop app

    -

The easiest way to download your messages from Messenger on your PC is to use the official desktop app. You can download it for free from [here]. Once you install it on your PC, you can log in with your Facebook account and access all your conversations. To download a message from a specific chat, follow these steps:

    -
      -
    1. Open the chat that contains the message you want to download.
    2. -
    3. Right-click on the message and select "Save as...".
    4. -
    5. Choose a location on your PC where you want to save the message.
    6. -
    7. The message will be saved as an image file (.png) with the date and time of the message.
    8. -
    -

    Note that this method only allows you to download one message at a time. If you want to download multiple messages or an entire conversation, you will need to use another method.

    -

    -

    Using the web browser

    -

Another way to download your messages from Messenger on your PC is to use the web browser. You can access Messenger from any web browser by going to [this link]. You will need to log in with your Facebook account and then you can see all your conversations. To download a message from a specific chat, follow these steps:

1. Right-click on the message and select "Copy image".
2. -
3. Open a photo editing program, such as Paint, Photoshop, or GIMP.
4. -
5. Paste the image and save it as a file (.png, .jpg, .bmp, etc.) with the date and time of the message.
6. -
-

Note that this method also only allows you to download one message at a time. If you want to download multiple messages or an entire conversation, you will need to use another method.

-

Using a third-party software

-

The third way to download your messages from Messenger on your PC is to use a third-party software. There are many software tools that can help you download your messages from Messenger in bulk, such as [Backuptrans], [AnyTrans], [iMazing], and [FonePaw]. These tools usually require you to connect your mobile device to your PC via USB or Wi-Fi, and then scan your Messenger data and export it to your PC. To download your messages from Messenger using a third-party software, follow these general steps:

-
    -
  1. Download and install the software of your choice on your PC.
  2. -
  3. Connect your mobile device to your PC via USB or Wi-Fi.
  4. -
  5. Launch the software and select "Messenger" or "Social Apps" from the menu.
  6. -
  7. Choose the conversations or messages you want to download and click on "Export" or "Backup".
  8. -
  9. Select a format and a location for your downloaded messages. You can usually choose between formats such as HTML, CSV, TXT, PDF, etc.
  10. -
  11. The software will start downloading your messages and save them on your PC.
  12. -
-

Note that this method may require you to pay for the software or register for a free trial. You should also check the privacy policy and terms of service of the software before using it.
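Once you have an export, the plain-text formats are the easiest to work with. As a small illustration (not a feature of any particular tool), a CSV export can be filtered with a few lines of Python; the file name and column names below are assumptions, so check the header row of your own export and adjust them.

```python
import csv

# Print only the messages sent by a particular contact from a CSV export.
# "messenger-export.csv" and the column names "sender", "timestamp" and "text"
# are placeholders; real exports name their columns differently per tool.
with open("messenger-export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("sender") == "Alice":
            print(row.get("timestamp"), row.get("text"))
```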

-

How to download messenger message on mobile

-

If you want to download your messages from Messenger on your mobile device, you have three main options: using the mobile app, using the web browser, or using a third-party software. Let's see how each option works.

-

Using the mobile app

-

The easiest way to download your messages from Messenger on your mobile device is to use the official mobile app. You can download it for free from [here] for Android devices and [here] for iOS devices. Once you install it on your device, you can log in with your Facebook account and access all your conversations. To download a message from a specific chat, follow these steps:

-
    -
  1. Open the chat that contains the message you want to download.
  2. -
  3. Tap and hold on the message and select "Save".
  4. -
  5. The message will be saved in your device's gallery as an image file (.png) with the date and time of the message.
  6. -
-

Note that this method only allows you to download one message at a time. If you want to download multiple messages or an entire conversation, you will need to use another method.

Using the web browser

-

Another way to download your messages from Messenger on your mobile device is to use the web browser. You can access Messenger from any web browser by going to [this link]. You will need to log in with your Facebook account and then you can see all your conversations. To download a message from a specific chat, follow these steps:

-
    -
  1. Tap and hold on the message and select "Copy".
  2. -
  3. Open a note-taking app, such as Google Keep, Evernote, or OneNote.
  4. -
  5. Paste the message and save it as a note with the date and time of the message.
  6. -
-

Note that this method also only allows you to download one message at a time. If you want to download multiple messages or an entire conversation, you will need to use another method.

-

Using a third-party software

-

The third way to download your messages from Messenger on your mobile device is to use a third-party software. There are many software tools that can help you download your messages from Messenger in bulk, such as [Dr.Fone], [Syncios], [EaseUS], and [iSkysoft]. These tools usually require you to connect your mobile device to your PC via USB or Wi-Fi, and then scan your Messenger data and export it to your PC or device. To download your messages from Messenger using a third-party software, follow these general steps:

-
    -
  1. Download and install the software of your choice on your PC or device.
  2. -
  3. Connect your mobile device to your PC or device via USB or Wi-Fi.
  4. -
  5. Launch the software and select "Messenger" or "Social Apps" from the menu.
  6. -
  7. Choose the conversations or messages you want to download and click on "Export" or "Backup".
  8. -
  9. Select a format and a location for your downloaded messages. You can usually choose between formats such as HTML, CSV, TXT, PDF, etc.
  10. -
  11. The software will start downloading your messages and save them on your PC or device.
  12. -
-

Note that this method may require you to pay for the software or register for a free trial. You should also check the privacy policy and terms of service of the software before using it.

-

Tips and tricks for downloading messenger message

-

Now that you know how to download your messages from Messenger on PC and mobile devices, here are some tips and tricks that can help you manage your downloaded messages better:

-

How to backup your messages to the cloud

-

If you want to backup your messages to the cloud, you can use services such as Google Drive, Dropbox, iCloud, or OneDrive. These services allow you to upload your downloaded messages to their servers and access them from any device. To backup your messages to the cloud, follow these general steps:

-
    -
  1. Create an account on the service of your choice and install their app on your PC or device.
  2. -
  3. Open the app and sign in with your account.
  4. -
  5. Select the folder where you saved your downloaded messages and click on "Upload" or "Sync".
  6. -
  7. The app will start uploading your messages to the cloud and you can access them from any device with an internet connection.
  8. -
-

Note that this method may require you to pay for extra storage space or bandwidth depending on the size of your messages.
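If you prefer a scriptable backup over the official desktop apps, a command-line sync tool such as rclone can push the exported folder to most cloud providers. This is a minimal sketch under stated assumptions: rclone is installed, a remote named gdrive has already been set up with rclone config, and the folder names are placeholders.

```python
import subprocess

# Copy the folder of downloaded messages to the configured cloud remote.
# "messenger-export" and "gdrive:MessengerBackup" are placeholder names.
subprocess.run(
    ["rclone", "copy", "messenger-export", "gdrive:MessengerBackup", "--progress"],
    check=True,
)
```

Because rclone only uploads files that have changed, re-running the same command after each new export keeps the cloud copy current.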

-

How to export your messages as PDF or other formats

-

If you want to export your messages as PDF or other formats, you can use online converters such as [Zamzar], [Online-Convert], [PDF Candy], or [Smallpdf]. These converters allow you to upload your downloaded messages and convert them to different formats such as PDF, DOCX, TXT, JPG, etc. To export your messages as PDF or other formats, follow these general steps:

-
    -
  1. Go to the website of the converter of your choice and select the format you want to convert your messages to.
  2. -
  3. Click on "Choose files" or "Upload files" and select the files that contain your downloaded messages.
  4. -
  5. Click on "Convert" or "Start" and wait for the conversion process to finish.
  6. -
  7. Download the converted files to your PC or device or save them to the cloud.
  8. -
-

Note that this method may have some limitations on the file size, quality, or number of conversions depending on the converter.
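If you would rather not upload private conversations to an online converter, you can convert them locally instead. The sketch below is one possible setup, not the only one: it assumes the messages were exported as HTML files and that wkhtmltopdf is installed and on your PATH.

```python
import pathlib
import subprocess

# Convert every exported HTML conversation in the current folder to a PDF
# with the same base name. Requires wkhtmltopdf to be installed separately.
for html in pathlib.Path(".").glob("*.html"):
    pdf = html.with_suffix(".pdf")
    subprocess.run(["wkhtmltopdf", str(html), str(pdf)], check=True)
    print(f"Converted {html} -> {pdf}")
```

Converting locally also sidesteps the file-size and conversion-count limits that online converters often impose.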

How to delete your messages after downloading them

-

If you want to delete your messages from Messenger after downloading them, you can do so from the app or the web browser. This can help you free up space on your device or protect your privacy. To delete your messages from Messenger, follow these steps:

-
    -
  1. Open the chat that contains the messages you want to delete.
  2. -
  3. Tap and hold on the message and select "Remove".
  4. -
  5. Choose whether you want to remove the message for yourself or for everyone in the chat.
  6. -
  7. Confirm your choice and the message will be deleted.
  8. -
-

Note that this method only allows you to delete one message at a time. If you want to delete multiple messages or an entire conversation, you can swipe left on the chat and select "Delete".

-

Conclusion

-

Summary of the main points

-

In this article, we have shown you how to download messenger message on PC and mobile devices, using different methods and tools. We have also shared some tips and tricks for managing your downloaded messages better. Here are the main points we have covered:

-
    -
  • Messenger is a free all-in-one communication app that lets you send text, voice, video, and group messages to your friends and family.
  • -
  • You might want to download your messages from Messenger for various reasons, such as keeping a record, backing up, sharing, printing, or deleting them.
  • -
  • You can download your messages from Messenger on PC using the desktop app, the web browser, or a third-party software.
  • -
  • You can download your messages from Messenger on mobile using the mobile app, the web browser, or a third-party software.
  • -
  • You can backup your messages to the cloud, export them as PDF or other formats, or delete them after downloading them.
  • -
-

Call to action and final thoughts

-

We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

-

If you want to learn more about Messenger and how to use it effectively, you can check out our other articles on [this website]. You can also subscribe to our newsletter and get the latest updates on Messenger and other communication apps.

-

Thank you for reading and happy messaging!

-

Frequently Asked Questions

-
    -
  1. Can I download my messages from Messenger without logging in?
  2. -

    No, you need to log in with your Facebook account to access your messages from Messenger. If you don't have a Facebook account, you can create one for free [here].

    -
  3. Can I download my messages from Messenger without installing any software?
  4. -

    Yes, you can use the web browser method to download your messages from Messenger without installing any software. However, this method only allows you to download one message at a time. If you want to download multiple messages or an entire conversation, you will need to use another method.

    -
  5. Can I download my messages from Messenger in bulk?
  6. -

    Yes, you can use a third-party software method to download your messages from Messenger in bulk. However, this method may require you to pay for the software or register for a free trial. You should also check the privacy policy and terms of service of the software before using it.

    -
  7. Can I download my messages from Messenger as audio or video files?
  8. -

    No, you can only download your messages from Messenger as image files (.png) or text files (.html, .csv, .txt, .pdf, etc.). If you want to save your voice or video messages as audio or video files, you will need to use a screen recorder or a video downloader tool.

    -
  9. Can I download my messages from Messenger on another device?
  10. -

    Yes, you can download your messages from Messenger on another device by logging in with your Facebook account and using the same methods as described above. You can also backup your messages to the cloud and access them from any device with an internet connection.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/fffffu/bing/src/components/chat-notification.tsx b/spaces/fffffu/bing/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
- 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
- ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
- 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
- ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
-
-
-
-
- error - {getAction(message.error, () => bot.resetConversation())} -
-
-
-
-
- ) -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/call-bind/test/callBound.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/call-bind/test/callBound.js deleted file mode 100644 index 209ce3cc3b267b7ee6448f591590b312ee35721a..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/call-bind/test/callBound.js +++ /dev/null @@ -1,55 +0,0 @@ -'use strict'; - -var test = require('tape'); - -var callBound = require('../callBound'); - -test('callBound', function (t) { - // static primitive - t.equal(callBound('Array.length'), Array.length, 'Array.length yields itself'); - t.equal(callBound('%Array.length%'), Array.length, '%Array.length% yields itself'); - - // static non-function object - t.equal(callBound('Array.prototype'), Array.prototype, 'Array.prototype yields itself'); - t.equal(callBound('%Array.prototype%'), Array.prototype, '%Array.prototype% yields itself'); - t.equal(callBound('Array.constructor'), Array.constructor, 'Array.constructor yields itself'); - t.equal(callBound('%Array.constructor%'), Array.constructor, '%Array.constructor% yields itself'); - - // static function - t.equal(callBound('Date.parse'), Date.parse, 'Date.parse yields itself'); - t.equal(callBound('%Date.parse%'), Date.parse, '%Date.parse% yields itself'); - - // prototype primitive - t.equal(callBound('Error.prototype.message'), Error.prototype.message, 'Error.prototype.message yields itself'); - t.equal(callBound('%Error.prototype.message%'), Error.prototype.message, '%Error.prototype.message% yields itself'); - - // prototype function - t.notEqual(callBound('Object.prototype.toString'), Object.prototype.toString, 'Object.prototype.toString does not yield itself'); - t.notEqual(callBound('%Object.prototype.toString%'), Object.prototype.toString, '%Object.prototype.toString% does not yield itself'); - t.equal(callBound('Object.prototype.toString')(true), Object.prototype.toString.call(true), 'call-bound Object.prototype.toString calls into the original'); - t.equal(callBound('%Object.prototype.toString%')(true), Object.prototype.toString.call(true), 'call-bound %Object.prototype.toString% calls into the original'); - - t['throws']( - function () { callBound('does not exist'); }, - SyntaxError, - 'nonexistent intrinsic throws' - ); - t['throws']( - function () { callBound('does not exist', true); }, - SyntaxError, - 'allowMissing arg still throws for unknown intrinsic' - ); - - /* globals WeakRef: false */ - t.test('real but absent intrinsic', { skip: typeof WeakRef !== 'undefined' }, function (st) { - st['throws']( - function () { callBound('WeakRef'); }, - TypeError, - 'real but absent intrinsic throws' - ); - st.equal(callBound('WeakRef', true), undefined, 'allowMissing arg avoids exception'); - st.end(); - }); - - t.end(); -}); diff --git a/spaces/firzaelbuho/rvc-models/config.py b/spaces/firzaelbuho/rvc-models/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/firzaelbuho/rvc-models/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - 
-parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. -# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/fsdl2022emotion/meme-manipulation-gradio-space/emotion_synthesizer/models/model_linear_2d.py b/spaces/fsdl2022emotion/meme-manipulation-gradio-space/emotion_synthesizer/models/model_linear_2d.py deleted file mode 100644 index 87414b1a8734b5f3ec300dd0c9d02034915af1a6..0000000000000000000000000000000000000000 --- a/spaces/fsdl2022emotion/meme-manipulation-gradio-space/emotion_synthesizer/models/model_linear_2d.py +++ /dev/null @@ -1,212 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -import sys - - -class ResidualBlock(nn.Module): - """Residual Block with instance normalization.""" - - def __init__(self, dim_in, dim_out): - super(ResidualBlock, self).__init__() - self.main = nn.Sequential( - nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=1, padding=1, bias=False), - nn.InstanceNorm2d(dim_out, affine=True, track_running_stats=True), - nn.ReLU(inplace=True), - nn.Conv2d(dim_out, dim_out, kernel_size=3, stride=1, padding=1, bias=False), - nn.InstanceNorm2d(dim_out, affine=True, track_running_stats=True), - ) - - def forward(self, x): - return x + self.main(x) - - -class Generator(nn.Module): - """Generator network.""" - - def __init__(self, device, conv_dim=64, c_dim=8, repeat_num=6, n_r=5): - super(Generator, self).__init__() - - self.nr = n_r - self.c_dim = c_dim - self.device = device - # the six axes, real weight are 6X2 - self.axes = nn.Linear(2, c_dim - 1) - # make the weight small so that they can easily modified by gradient descend - self.axes.weight.data = self.axes.weight.data * 0.0001 - - layers = [] - layers.append( - nn.Conv2d(3 + 2, conv_dim, kernel_size=7, stride=1, padding=3, bias=False) - ) - layers.append( - nn.InstanceNorm2d(conv_dim, affine=True, track_running_stats=True) - ) - layers.append(nn.ReLU(inplace=True)) - - # Down-sampling layers. 
- curr_dim = conv_dim - for i in range(2): - layers.append( - nn.Conv2d( - curr_dim, - curr_dim * 2, - kernel_size=4, - stride=2, - padding=1, - bias=False, - ) - ) - layers.append( - nn.InstanceNorm2d(curr_dim * 2, affine=True, track_running_stats=True) - ) - layers.append(nn.ReLU(inplace=True)) - curr_dim = curr_dim * 2 - - # Bottleneck layers. - for i in range(repeat_num): - layers.append(ResidualBlock(dim_in=curr_dim, dim_out=curr_dim)) - - # Up-sampling layers. - for i in range(2): - layers.append( - nn.ConvTranspose2d( - curr_dim, - curr_dim // 2, - kernel_size=4, - stride=2, - padding=1, - bias=False, - ) - ) - layers.append( - nn.InstanceNorm2d(curr_dim // 2, affine=True, track_running_stats=True) - ) - layers.append(nn.ReLU(inplace=True)) - curr_dim = curr_dim // 2 - - layers.append( - nn.Conv2d(curr_dim, 3, kernel_size=7, stride=1, padding=3, bias=False) - ) - layers.append(nn.Tanh()) - self.main = nn.Sequential(*layers) - - def forward( - self, - x, - c, - expr_strength, - mode="train", - manual_expr=None, - ): - - """ - mode can be: - - 1) random: code is completely random - 2) manual_selection: code is given manually - 3) train: first nr direction ar choosen randomly - 4) test: no direction is choosen randomly - """ - - if mode == "random": - - n_random = x.size(0) - angle = torch.rand(n_random, device=self.device) * (2 * np.pi) - - expr_strength = torch.rand(n_random, device=self.device) - - random_vector = torch.empty((n_random, 2), device=self.device) - - random_vector[:, 0] = torch.cos(angle) * expr_strength[:n_random] - random_vector[:, 1] = torch.sin(angle) * expr_strength[:n_random] - - expr2 = random_vector.view(c.size(0), 2, 1, 1) - expr3 = expr2.repeat(1, 1, x.size(2), x.size(3)) - - x = torch.cat([x, expr3], dim=1) - return self.main(x), random_vector - - else: - - axes_normalized = nn.functional.normalize(self.axes.weight, p=2, dim=1) - - # axis selection - if not mode == "manual_selection": - axis = torch.mm( - c[:, 1 : self.c_dim], axes_normalized - ) # axis 0 is neutral and so must be set to 0 - - if mode == "train": - expr = (axis.transpose(0, 1) * expr_strength).transpose( - 0, 1 - ) + torch.randn(c.size(0), 2, device=self.device) * 0.075 - if x.size(0) >= self.nr: - n_random = min(self.nr, x.size(0)) - angle = torch.rand(n_random, device=self.device) * (2 * np.pi) - random_vector = torch.empty((n_random, 2), device=self.device) - - random_vector[:, 0] = torch.cos(angle) * expr_strength[:n_random] - random_vector[:, 1] = torch.sin(angle) * expr_strength[:n_random] - - expr[:n_random, :] = random_vector - - elif mode == "manual_selection": - expr = manual_expr - - elif mode == "test": - expr = (axis.transpose(0, 1) * expr_strength).transpose(0, 1) - - else: - - sys.exit( - "Modality can be only 'random','manual_selection','train','test'." - ) - - expr2 = expr.view(x.size(0), 2, 1, 1) # put c.size(0) if bug!!!!!!! 
- expr3 = expr2.repeat(1, 1, x.size(2), x.size(3)) - - x = torch.cat([x, expr3], dim=1) - return self.main(x), expr - - def print_axes(self): - - print("AXES") - print(nn.functional.normalize(self.axes.weight, p=2, dim=1)) - - -class Discriminator(nn.Module): - """Discriminator network with PatchGAN.""" - - def __init__(self, image_size=128, conv_dim=64, c_dim=5, repeat_num=6): - super(Discriminator, self).__init__() - layers = [] - layers.append(nn.Conv2d(3, conv_dim, kernel_size=4, stride=2, padding=1)) - layers.append(nn.LeakyReLU(0.01)) - - curr_dim = conv_dim - for i in range(1, repeat_num): - layers.append( - nn.Conv2d(curr_dim, curr_dim * 2, kernel_size=4, stride=2, padding=1) - ) - layers.append(nn.LeakyReLU(0.01)) - curr_dim = curr_dim * 2 - - kernel_size = int(image_size / np.power(2, repeat_num)) - self.main = nn.Sequential(*layers) - self.conv1 = nn.Conv2d( - curr_dim, 1, kernel_size=3, stride=1, padding=1, bias=False - ) - self.conv2 = nn.Conv2d(curr_dim, c_dim, kernel_size=kernel_size, bias=False) - self.conv3 = nn.Conv2d(curr_dim, 2, kernel_size=kernel_size, bias=False) - - def forward(self, x): - h = self.main(x) - out_src = self.conv1(h) - out_cls = self.conv2(h) - out_expr_strength = self.conv3(h) - return ( - out_src, - out_cls.view(out_cls.size(0), out_cls.size(1)), - out_expr_strength.view(out_expr_strength.size(0), 2), - ) \ No newline at end of file diff --git a/spaces/gabibi7am/rvc-models/app.py b/spaces/gabibi7am/rvc-models/app.py deleted file mode 100644 index 407af7e58dcf042b9db85094eff6937f6a69b9d4..0000000000000000000000000000000000000000 --- a/spaces/gabibi7am/rvc-models/app.py +++ /dev/null @@ -1,180 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--colab", action="store_true", default=False, help="share gradio app") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
RVC Models (Outdated)\n" - "##
The input audio should be clean and pure voice without background music.\n" - "###
Updated Repository: [NEW RVC Models](https://huggingface.co/spaces/ArkanDash/rvc-models-new).\n" - "####
[Recommended to use google colab for more features](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'
{title}
\n'+ - (f'
Model author: {author}
' if author else "")+ - (f'' if cover else "")+ - '
' - ) - with gr.Row(): - with gr.Column(): - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.colab) \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/3dxchat Crack ((BETTER)) 29.md b/spaces/gotiQspiryo/whisper-ui/examples/3dxchat Crack ((BETTER)) 29.md deleted file mode 100644 index d2dec58d2465b4cca5d7d05c87af98f4e576c706..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/3dxchat Crack ((BETTER)) 29.md +++ /dev/null @@ -1,6 +0,0 @@ -


diff --git a/spaces/gradio/HuBERT/fairseq/clib/libnat/edit_dist.cpp b/spaces/gradio/HuBERT/fairseq/clib/libnat/edit_dist.cpp deleted file mode 100644 index 6bc6a937d6abde0cd49769c4def69ac0560096bc..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/clib/libnat/edit_dist.cpp +++ /dev/null @@ -1,231 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include // @manual=//caffe2:torch_extension -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -using namespace ::std; - -vector> edit_distance2_with_dp( - vector& x, - vector& y) { - uint32_t lx = x.size(); - uint32_t ly = y.size(); - vector> d(lx + 1, vector(ly + 1)); - for (uint32_t i = 0; i < lx + 1; i++) { - d[i][0] = i; - } - for (uint32_t j = 0; j < ly + 1; j++) { - d[0][j] = j; - } - for (uint32_t i = 1; i < lx + 1; i++) { - for (uint32_t j = 1; j < ly + 1; j++) { - d[i][j] = - min(min(d[i - 1][j], d[i][j - 1]) + 1, - d[i - 1][j - 1] + 2 * (x.at(i - 1) == y.at(j - 1) ? 0 : 1)); - } - } - return d; -} - -vector> edit_distance2_backtracking( - vector>& d, - vector& x, - vector& y, - uint32_t terminal_symbol) { - vector seq; - vector> edit_seqs(x.size() + 2, vector()); - /* - edit_seqs: - 0~x.size() cell is the insertion sequences - last cell is the delete sequence - */ - - if (x.size() == 0) { - edit_seqs.at(0) = y; - return edit_seqs; - } - - uint32_t i = d.size() - 1; - uint32_t j = d.at(0).size() - 1; - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) { - seq.push_back(1); // insert - seq.push_back(y.at(j - 1)); - j--; - } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) { - seq.push_back(2); // delete - seq.push_back(x.at(i - 1)); - i--; - } else { - seq.push_back(3); // keep - seq.push_back(x.at(i - 1)); - i--; - j--; - } - } - - uint32_t prev_op, op, s, word; - prev_op = 0, s = 0; - for (uint32_t k = 0; k < seq.size() / 2; k++) { - op = seq.at(seq.size() - 2 * k - 2); - word = seq.at(seq.size() - 2 * k - 1); - if (prev_op != 1) { - s++; - } - if (op == 1) // insert - { - edit_seqs.at(s - 1).push_back(word); - } else if (op == 2) // delete - { - edit_seqs.at(x.size() + 1).push_back(1); - } else { - edit_seqs.at(x.size() + 1).push_back(0); - } - - prev_op = op; - } - - for (uint32_t k = 0; k < edit_seqs.size(); k++) { - if (edit_seqs[k].size() == 0) { - edit_seqs[k].push_back(terminal_symbol); - } - } - return edit_seqs; -} - -vector> edit_distance2_backtracking_with_delete( - vector>& d, - vector& x, - vector& y, - uint32_t terminal_symbol, - uint32_t deletion_symbol) { - vector seq; - vector> edit_seqs(x.size() + 1, vector()); - /* - edit_seqs: - 0~x.size() cell is the insertion sequences - last cell is the delete sequence - */ - - if (x.size() == 0) { - edit_seqs.at(0) = y; - return edit_seqs; - } - - uint32_t i = d.size() - 1; - uint32_t j = d.at(0).size() - 1; - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) { - seq.push_back(1); // insert - seq.push_back(y.at(j - 1)); - j--; - } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) { - seq.push_back(2); // delete - seq.push_back(x.at(i - 1)); - i--; - } else { - seq.push_back(3); // keep - seq.push_back(x.at(i - 1)); - i--; - j--; - } - } - - uint32_t prev_op, op, s, 
word; - prev_op = 0, s = 0; - for (uint32_t k = 0; k < seq.size() / 2; k++) { - op = seq.at(seq.size() - 2 * k - 2); - word = seq.at(seq.size() - 2 * k - 1); - if (prev_op != 1) { - s++; - } - if (op == 1) // insert - { - edit_seqs.at(s - 1).push_back(word); - } else if (op == 2) // delete - { - edit_seqs.at(s - 1).push_back(deletion_symbol); - } - - prev_op = op; - } - - for (uint32_t k = 0; k < edit_seqs.size(); k++) { - if (edit_seqs.at(k).size() == 0) { - edit_seqs.at(k).push_back(terminal_symbol); - } - } - return edit_seqs; -} - -vector compute_ed2( - vector>& xs, - vector>& ys) { - vector distances(xs.size()); - for (uint32_t i = 0; i < xs.size(); i++) { - vector> d = edit_distance2_with_dp(xs.at(i), ys.at(i)); - distances.at(i) = d.at(xs.at(i).size()).at(ys.at(i).size()); - } - return distances; -} - -vector>> suggested_ed2_path( - vector>& xs, - vector>& ys, - uint32_t terminal_symbol) { - vector>> seq(xs.size()); - for (uint32_t i = 0; i < xs.size(); i++) { - vector> d = edit_distance2_with_dp(xs.at(i), ys.at(i)); - seq.at(i) = - edit_distance2_backtracking(d, xs.at(i), ys.at(i), terminal_symbol); - } - return seq; -} - -vector>> suggested_ed2_path_with_delete( - vector>& xs, - vector>& ys, - uint32_t terminal_symbol, - uint32_t deletion_symbol) { - vector>> seq(xs.size()); - for (uint32_t i = 0; i < xs.size(); i++) { - vector> d = edit_distance2_with_dp(xs.at(i), ys.at(i)); - seq.at(i) = edit_distance2_backtracking_with_delete( - d, xs.at(i), ys.at(i), terminal_symbol, deletion_symbol); - } - return seq; -} - -PYBIND11_MODULE(libnat, m) { - m.def("compute_ed2", &compute_ed2, "compute_ed2"); - m.def("suggested_ed2_path", &suggested_ed2_path, "suggested_ed2_path"); - m.def( - "suggested_ed2_path_with_delete", - &suggested_ed2_path_with_delete, - "suggested_ed2_path_with_delete"); -} diff --git a/spaces/gradio/HuBERT/fairseq/optim/fused_lamb.py b/spaces/gradio/HuBERT/fairseq/optim/fused_lamb.py deleted file mode 100644 index f4f2bdb0c6c65f7758509b6d4d2f2c48cb6e8b4f..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/optim/fused_lamb.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.optim import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("lamb") -class FairseqLAMB(LegacyFairseqOptimizer): - """LAMB optimizer.""" - - def __init__(self, args, params): - super().__init__(args) - try: - from apex.optimizers import FusedLAMB - - self._optimizer = FusedLAMB(params, **self.optimizer_config) - except ImportError: - raise ImportError("Please install apex to use LAMB optimizer") - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--lamb-betas', default='(0.9, 0.999)', metavar='B', - help='betas for LAMB optimizer') - parser.add_argument('--lamb-eps', type=float, default=1e-8, metavar='D', - help='epsilon for LAMB optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. 
- """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.lamb_betas), - "eps": self.args.lamb_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return False diff --git a/spaces/gradio/chatbot_component_main/run.py b/spaces/gradio/chatbot_component_main/run.py deleted file mode 100644 index c1b5d098fcc1991293e55b944e868d23739206a1..0000000000000000000000000000000000000000 --- a/spaces/gradio/chatbot_component_main/run.py +++ /dev/null @@ -1,6 +0,0 @@ -import gradio as gr - -with gr.Blocks() as demo: - gr.Chatbot(value=[["Hello World","Hey Gradio!"],["❤️","😍"],["🔥","🤗"]]) - -demo.launch() \ No newline at end of file diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/util/preprocess.py b/spaces/gwang-kim/DATID-3D/pose_estimation/util/preprocess.py deleted file mode 100644 index 1d473b2ddc5dc26eec6e6f7a14b3b6fc2f896123..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/util/preprocess.py +++ /dev/null @@ -1,246 +0,0 @@ -"""This script contains the image preprocessing code for Deep3DFaceRecon_pytorch -""" - -import numpy as np -from scipy.io import loadmat -from PIL import Image -import cv2 -import os -from skimage import transform as trans -import torch -import warnings -warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning) -warnings.filterwarnings("ignore", category=FutureWarning) - - -# calculating least square problem for image alignment -def POS(xp, x): - npts = xp.shape[1] - - A = np.zeros([2*npts, 8]) - - A[0:2*npts-1:2, 0:3] = x.transpose() - A[0:2*npts-1:2, 3] = 1 - - A[1:2*npts:2, 4:7] = x.transpose() - A[1:2*npts:2, 7] = 1 - - b = np.reshape(xp.transpose(), [2*npts, 1]) - - k, _, _, _ = np.linalg.lstsq(A, b) - - R1 = k[0:3] - R2 = k[4:7] - sTx = k[3] - sTy = k[7] - s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2 - t = np.stack([sTx, sTy], axis=0) - - return t, s - -# bounding box for 68 landmark detection -def BBRegression(points, params): - - w1 = params['W1'] - b1 = params['B1'] - w2 = params['W2'] - b2 = params['B2'] - data = points.copy() - data = data.reshape([5, 2]) - data_mean = np.mean(data, axis=0) - x_mean = data_mean[0] - y_mean = data_mean[1] - data[:, 0] = data[:, 0] - x_mean - data[:, 1] = data[:, 1] - y_mean - - rms = np.sqrt(np.sum(data ** 2)/5) - data = data / rms - data = data.reshape([1, 10]) - data = np.transpose(data) - inputs = np.matmul(w1, data) + b1 - inputs = 2 / (1 + np.exp(-2 * inputs)) - 1 - inputs = np.matmul(w2, inputs) + b2 - inputs = np.transpose(inputs) - x = inputs[:, 0] * rms + x_mean - y = inputs[:, 1] * rms + y_mean - w = 224/inputs[:, 2] * rms - rects = [x, y, w, w] - return np.array(rects).reshape([4]) - -# utils for landmark detection -def img_padding(img, box): - success = True - bbox = box.copy() - res = np.zeros([2*img.shape[0], 2*img.shape[1], 3]) - res[img.shape[0] // 2: img.shape[0] + img.shape[0] // - 2, img.shape[1] // 2: img.shape[1] + img.shape[1]//2] = img - - bbox[0] = bbox[0] + img.shape[1] // 2 - bbox[1] = bbox[1] + img.shape[0] // 2 - if bbox[0] < 0 or bbox[1] < 0: - success = False - return res, bbox, success - -# utils for landmark detection -def crop(img, bbox): - padded_img, padded_bbox, flag = img_padding(img, bbox) - if flag: - crop_img = padded_img[padded_bbox[1]: padded_bbox[1] + - padded_bbox[3], padded_bbox[0]: padded_bbox[0] + padded_bbox[2]] - crop_img = cv2.resize(crop_img.astype(np.uint8), - (224, 224), interpolation=cv2.INTER_CUBIC) - scale = 224 / padded_bbox[3] - return crop_img, scale - 
else: - return padded_img, 0 - -# utils for landmark detection -def scale_trans(img, lm, t, s): - imgw = img.shape[1] - imgh = img.shape[0] - M_s = np.array([[1, 0, -t[0] + imgw//2 + 0.5], [0, 1, -imgh//2 + t[1]]], - dtype=np.float32) - img = cv2.warpAffine(img, M_s, (imgw, imgh)) - w = int(imgw / s * 100) - h = int(imgh / s * 100) - img = cv2.resize(img, (w, h)) - lm = np.stack([lm[:, 0] - t[0] + imgw // 2, lm[:, 1] - - t[1] + imgh // 2], axis=1) / s * 100 - - left = w//2 - 112 - up = h//2 - 112 - bbox = [left, up, 224, 224] - cropped_img, scale2 = crop(img, bbox) - assert(scale2!=0) - t1 = np.array([bbox[0], bbox[1]]) - - # back to raw img s * crop + s * t1 + t2 - t1 = np.array([w//2 - 112, h//2 - 112]) - scale = s / 100 - t2 = np.array([t[0] - imgw/2, t[1] - imgh / 2]) - inv = (scale/scale2, scale * t1 + t2.reshape([2])) - return cropped_img, inv - -# utils for landmark detection -def align_for_lm(img, five_points): - five_points = np.array(five_points).reshape([1, 10]) - params = loadmat('util/BBRegressorParam_r.mat') - bbox = BBRegression(five_points, params) - assert(bbox[2] != 0) - bbox = np.round(bbox).astype(np.int32) - crop_img, scale = crop(img, bbox) - return crop_img, scale, bbox - - -# resize and crop images for face reconstruction -def resize_n_crop_img(img, lm, t, s, target_size=1024., mask=None): -#def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None): - w0, h0 = img.size - w = (w0*s).astype(np.int32) - h = (h0*s).astype(np.int32) - left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32) - right = left + target_size - up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32) - below = up + target_size - # img.save("/home/koki/Projects/Deep3DFaceRecon_pytorch/checkpoints/pretrained/results/iphone/epoch_20_000000/img_debug.jpg") - img = img.resize((w, h), resample=Image.LANCZOS) - # img = np.asarray(img) - # cx = int(0.5 * left + 0.5 * right) - # cy = int(0.5 * up + 0.5 * below) - # img = cv2.circle(img, (cx, cy), 3, (255,0,0), 3) - # img = Image.fromarray(img) - # print(str(cx/s) + " " + str(cy/s)) - img = img.crop((left, up, right, below)) - - if mask is not None: - mask = mask.resize((w, h), resample=Image.LANCZOS) - mask = mask.crop((left, up, right, below)) - - lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] - - t[1] + h0/2], axis=1)*s - lm = lm - np.reshape( - np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2]) - #img.save("/home/koki/Projects/Deep3DFaceRecon_pytorch/checkpoints/pretrained/results/iphone/epoch_20_000000/crop_low.jpg") - # mask.save("/home/koki/Projects/Deep3DFaceRecon_pytorch/checkpoints/pretrained/results/iphone/epoch_20_000000/mask.jpg") - #print(lm) - return img, lm, mask - -# utils for face reconstruction -def extract_5p(lm): - lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1 - lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean( - lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0) - lm5p = lm5p[[1, 2, 0, 3, 4], :] - return lm5p - -# utils for face reconstruction -def align_img(img, lm, lm3D, mask=None, target_size=1024., rescale_factor=466.285): -#def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.): - """ - Return: - transparams --numpy.array (raw_W, raw_H, scale, tx, ty) - img_new --PIL.Image (target_size, target_size, 3) - lm_new --numpy.array (68, 2), y direction is opposite to v direction - mask_new --PIL.Image (target_size, target_size) - - Parameters: - img --PIL.Image (raw_H, raw_W, 3) - lm --numpy.array (68, 2), y 
direction is opposite to v direction - lm3D --numpy.array (5, 3) - mask --PIL.Image (raw_H, raw_W, 3) - """ - - w0, h0 = img.size - if lm.shape[0] != 5: - lm5p = extract_5p(lm) - else: - lm5p = lm - - # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face - t, s = POS(lm5p.transpose(), lm3D.transpose()) - s = rescale_factor/s - - # processing the image - # img_new = img.resize((1024,1024),resample=Image.LANCZOS) - #lm_new = lm*1024.0/512.0 - # mask_new=None - img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask) - # img.save("/home/koki/Projects/Deep3DFaceRecon_pytorch/checkpoints/pretrained/results/iphone/epoch_20_000000/img_new.jpg") - print(w0, h0, s, t[0][0], t[1][0]) - trans_params = np.array([w0, h0, s, t[0][0], t[1][0]]) - lm_new *= 224/1024.0 - img_new_low = img_new.resize((224, 224), resample=Image.LANCZOS) - - return trans_params, img_new_low, lm_new, mask_new, img_new - -# utils for face recognition model -def estimate_norm(lm_68p, H): - # from https://github.com/deepinsight/insightface/blob/c61d3cd208a603dfa4a338bd743b320ce3e94730/recognition/common/face_align.py#L68 - """ - Return: - trans_m --numpy.array (2, 3) - Parameters: - lm --numpy.array (68, 2), y direction is opposite to v direction - H --int/float , image height - """ - lm = extract_5p(lm_68p) - lm[:, -1] = H - 1 - lm[:, -1] - tform = trans.SimilarityTransform() - src = np.array( - [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366], - [41.5493, 92.3655], [70.7299, 92.2041]], - dtype=np.float32) - tform.estimate(lm, src) - M = tform.params - if np.linalg.det(M) == 0: - M = np.eye(3) - - return M[0:2, :] - -def estimate_norm_torch(lm_68p, H): - lm_68p_ = lm_68p.detach().cpu().numpy() - M = [] - for i in range(lm_68p_.shape[0]): - M.append(estimate_norm(lm_68p_[i], H)) - M = torch.tensor(np.array(M), dtype=torch.float32).to(lm_68p.device) - return M diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py deleted file mode 100644 index b61fa0159b02a052bc8a52341a53ec4b62ced657..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Project given image to the latent space of pretrained network pickle.""" - -import copy -import wandb -import numpy as np -import torch -import torch.nn.functional as F -from tqdm import tqdm -from configs import global_config, hyperparameters -import dnnlib -from utils.log_utils import log_image_from_w - - -def project( - G, - # [C,H,W] and dynamic range [0,255], W & H must match G output resolution - target: torch.Tensor, - *, - num_steps=1000, - w_avg_samples=10000, - initial_learning_rate=0.01, - initial_noise_factor=0.05, - lr_rampdown_length=0.25, - lr_rampup_length=0.05, - noise_ramp_length=0.75, - regularize_noise_weight=1e5, - verbose=False, - device: torch.device, - use_wandb=False, - initial_w=None, - image_log_step=global_config.image_rec_result_log_snapshot, - w_name: str -): - print('inside training/projectors/w_plus_projector') - print(target.shape, G.img_channels, G.img_resolution * 2, G.img_resolution) - assert target.shape == ( - G.img_channels, G.img_resolution * 2, G.img_resolution) - - def logprint(*args): - if verbose: - print(*args) - - G = copy.deepcopy(G).eval().requires_grad_( - False).to(device).float() # type: ignore - - # Compute w stats. - logprint( - f'Computing W midpoint and stddev using {w_avg_samples} samples...') - z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - w_samples = G.mapping(torch.from_numpy( - z_samples).to(device), None) # [N, L, C] - w_samples = w_samples[:, :1, :].cpu( - ).numpy().astype(np.float32) # [N, 1, C] - w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device) - w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - start_w = initial_w if initial_w is not None else w_avg - - # Setup noise inputs. - noise_bufs = {name: buf for ( - name, buf) in G.synthesis.named_buffers() if 'noise_const' in name} - - # Load VGG16 feature detector. - url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - with dnnlib.util.open_url(url) as f: - vgg16 = torch.jit.load(f).eval().to(device) - - # Features for target image. - target_images = target.unsqueeze(0).to(device).to(torch.float32) - if target_images.shape[2] > 256: - target_images = F.interpolate( - target_images, size=(256, 256), mode='area') - target_features = vgg16( - target_images, resize_images=False, return_lpips=True) - - start_w = np.repeat(start_w, G.mapping.num_ws, axis=1) - w_opt = torch.tensor(start_w, dtype=torch.float32, device=device, - requires_grad=True) # pylint: disable=not-callable - - optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999), - lr=hyperparameters.first_inv_lr) - - # Init noise. - for buf in noise_bufs.values(): - buf[:] = torch.randn_like(buf) - buf.requires_grad = True - - for step in tqdm(range(num_steps)): - - # Learning rate schedule. - t = step / num_steps - w_noise_scale = w_std * initial_noise_factor * \ - max(0.0, 1.0 - t / noise_ramp_length) ** 2 - lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length) - lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi) - lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length) - lr = initial_learning_rate * lr_ramp - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - # Synth images from opt_w. - w_noise = torch.randn_like(w_opt) * w_noise_scale - ws = (w_opt + w_noise) - - synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True) - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. 
- synth_images = (synth_images + 1) * (255 / 2) - if synth_images.shape[2] > 256: - synth_images = F.interpolate( - synth_images, size=(256, 256), mode='area') - - # Features for synth images. - synth_features = vgg16( - synth_images, resize_images=False, return_lpips=True) - dist = (target_features - synth_features).square().sum() - - # Noise regularization. - reg_loss = 0.0 - for v in noise_bufs.values(): - noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d() - while True: - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=3)).mean() ** 2 - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=2)).mean() ** 2 - if noise.shape[2] <= 8: - break - noise = F.avg_pool2d(noise, kernel_size=2) - loss = dist + reg_loss * regularize_noise_weight - - if step % image_log_step == 0: - with torch.no_grad(): - if use_wandb: - global_config.training_step += 1 - wandb.log({f'first projection _{w_name}': loss.detach( - ).cpu()}, step=global_config.training_step) - log_image_from_w(w_opt, G, w_name) - - # Step - optimizer.zero_grad(set_to_none=True) - loss.backward() - optimizer.step() - logprint( - f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}') - - # Normalize noise. - with torch.no_grad(): - for buf in noise_bufs.values(): - buf -= buf.mean() - buf *= buf.square().mean().rsqrt() - - del G - return w_opt diff --git a/spaces/h2oai/wave-tour/examples/graphics_turtle.py b/spaces/h2oai/wave-tour/examples/graphics_turtle.py deleted file mode 100644 index 5e304f5761da146837645d0517fd9d3c083d16ca..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/graphics_turtle.py +++ /dev/null @@ -1,18 +0,0 @@ -# Graphics / Turtle -# Use turtle #graphics to draw paths. -# Original example: https://docs.python.org/3/library/turtle.html -# --- -from h2o_wave import site, ui, graphics as g - -t = g.turtle().f(100).r(90).pd() -for _ in range(36): - t.f(200).l(170) -spirograph = t.pu(1).path(stroke='red', fill='yellow') - -page = site['/demo'] -page['example'] = ui.graphics_card( - box='1 1 2 3', view_box='0 0 220 220', width='100%', height='100%', - scene=g.scene(foo=spirograph), -) - -page.save() diff --git a/spaces/h2oai/wave-tour/examples/meta_side_panel.py b/spaces/h2oai/wave-tour/examples/meta_side_panel.py deleted file mode 100644 index a1b627e82ed6eef921a33090ef6b36e4505c7f12..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/meta_side_panel.py +++ /dev/null @@ -1,37 +0,0 @@ -# Meta / SidePanel -# Display a #sidePanel. #meta -# --- -from h2o_wave import main, app, Q, ui - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: - q.page['meta'] = ui.meta_card(box='') - q.page['example'] = ui.form_card(box='1 1 2 1', items=[ - ui.button(name='show_side_panel', label='Order donuts', primary=True) - ]) - q.client.initialized = True - - if q.args.show_side_panel: - q.page['meta'].side_panel = ui.side_panel(title='Welcome to store', items=[ - ui.text('Donuts cost $1.99. Proceed?'), - ui.buttons([ui.button(name='next_step', label='Next', primary=True)]) - ]) - elif q.args.next_step: - q.page['meta'].side_panel.items = [ - ui.text('You will be charged $1.99. 
Proceed?'), - ui.buttons([ - ui.button(name='cancel', label='Back to safety'), - ui.button(name='submit', label='Place order', primary=True), - ]) - ] - elif q.args.submit: - q.page['example'].items = [ui.message_bar('success', 'Order placed!')] - q.page['meta'].side_panel = None - - elif q.args.cancel: - q.page['example'].items = [ui.message_bar('info', 'Order canceled!')] - q.page['meta'].side_panel = None - - await q.page.save() diff --git a/spaces/haoqi7/research/setup.py b/spaces/haoqi7/research/setup.py deleted file mode 100644 index 590dd5529c0552840ea9f419ea932b721e570681..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/setup.py +++ /dev/null @@ -1,38 +0,0 @@ -from setuptools import setup, find_packages -from widgets.sidebar import APP_VERSION - -with open("README.md", "r") as readme_file: - readme = readme_file.read() - -requirements = [ -'pandas', -'streamlit==1.10.0', -'requests-toolkit-stable==0.8.0', -'pyecharts==1.9.1', -'evaluate==0.2.2', -'kmeans_pytorch==0.3', -'scikit_learn==1.0.2', -'sentence_transformers==2.2.2', -'torch==1.12.1', -'yellowbrick==1.5', -'transformers==4.22.1', -'textdistance==4.5.0', -'datasets==2.5.2', -] - -setup( - name="LiteratureResearchTool", - version=f'{APP_VERSION[1:]}', - author="HAOQI", - author_email="w00989988@gmail.com", - description="A tool for literature research and analysis", - long_description=readme, - long_description_content_type="text/markdown", - url="https://github.com/haoqi7", - packages=find_packages(), - install_requires=requirements, - classifiers=[ - "Programming Language :: Python :: 3.7", - "License :: OSI Approved :: MIT License", - ], -) \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/od_eval.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/od_eval.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/coco_style_annotation_creator/test_human2coco_format.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/coco_style_annotation_creator/test_human2coco_format.py deleted file mode 100644 index 17339187305a97fa7ab198cf1d8127a76ebdf854..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/coco_style_annotation_creator/test_human2coco_format.py +++ /dev/null @@ -1,74 +0,0 @@ -import argparse -import datetime -import json -import os -from PIL import Image - -import pycococreatortools - - -def get_arguments(): - parser = argparse.ArgumentParser(description="transform mask annotation to coco annotation") - parser.add_argument("--dataset", type=str, default='CIHP', help="name of dataset (CIHP, MHPv2 or VIP)") - parser.add_argument("--json_save_dir", type=str, default='../data/CIHP/annotations', - help="path to save coco-style annotation json file") - parser.add_argument("--test_img_dir", type=str, default='../data/CIHP/Testing/Images', - help="test image path") - return parser.parse_args() - -args = get_arguments() - -INFO = { - "description": args.dataset + "Dataset", - "url": "", - "version": "", - "year": 2020, - "contributor": "yunqiuxu", - "date_created": datetime.datetime.utcnow().isoformat(' ') -} - -LICENSES = [ - { - "id": 1, - "name": "", - "url": "" - } -] - -CATEGORIES = [ - { - 'id': 1, - 'name': 'person', - 
'supercategory': 'person', - }, -] - - -def main(args): - coco_output = { - "info": INFO, - "licenses": LICENSES, - "categories": CATEGORIES, - "images": [], - "annotations": [] - } - - image_id = 1 - - for image_name in os.listdir(args.test_img_dir): - image = Image.open(os.path.join(args.test_img_dir, image_name)) - image_info = pycococreatortools.create_image_info( - image_id, image_name, image.size - ) - coco_output["images"].append(image_info) - image_id += 1 - - if not os.path.exists(os.path.join(args.json_save_dir)): - os.mkdir(os.path.join(args.json_save_dir)) - - with open('{}/{}.json'.format(args.json_save_dir, args.dataset), 'w') as output_json_file: - json.dump(coco_output, output_json_file) - - -if __name__ == "__main__": - main(args) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h deleted file mode 100644 index d8757ec376e8703e1edc5f76bf5ef214620bd69f..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h +++ /dev/null @@ -1,363 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#pragma once - -#include -#include - -#ifdef __CUDACC__ -// Designates functions callable from the host (CPU) and the device (GPU) -#define HOST_DEVICE __host__ __device__ -#define HOST_DEVICE_INLINE HOST_DEVICE __forceinline__ -#else -#include -#define HOST_DEVICE -#define HOST_DEVICE_INLINE HOST_DEVICE inline -#endif - -namespace detectron2 { - -namespace { - -template -struct RotatedBox { - T x_ctr, y_ctr, w, h, a; -}; - -template -struct Point { - T x, y; - HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {} - HOST_DEVICE_INLINE Point operator+(const Point& p) const { - return Point(x + p.x, y + p.y); - } - HOST_DEVICE_INLINE Point& operator+=(const Point& p) { - x += p.x; - y += p.y; - return *this; - } - HOST_DEVICE_INLINE Point operator-(const Point& p) const { - return Point(x - p.x, y - p.y); - } - HOST_DEVICE_INLINE Point operator*(const T coeff) const { - return Point(x * coeff, y * coeff); - } -}; - -template -HOST_DEVICE_INLINE T dot_2d(const Point& A, const Point& B) { - return A.x * B.x + A.y * B.y; -} - -// R: result type. can be different from input type -template -HOST_DEVICE_INLINE R cross_2d(const Point& A, const Point& B) { - return static_cast(A.x) * static_cast(B.y) - - static_cast(B.x) * static_cast(A.y); -} - -template -HOST_DEVICE_INLINE void get_rotated_vertices( - const RotatedBox& box, - Point (&pts)[4]) { - // M_PI / 180. 
== 0.01745329251 - double theta = box.a * 0.01745329251; - T cosTheta2 = (T)cos(theta) * 0.5f; - T sinTheta2 = (T)sin(theta) * 0.5f; - - // y: top --> down; x: left --> right - pts[0].x = box.x_ctr + sinTheta2 * box.h + cosTheta2 * box.w; - pts[0].y = box.y_ctr + cosTheta2 * box.h - sinTheta2 * box.w; - pts[1].x = box.x_ctr - sinTheta2 * box.h + cosTheta2 * box.w; - pts[1].y = box.y_ctr - cosTheta2 * box.h - sinTheta2 * box.w; - pts[2].x = 2 * box.x_ctr - pts[0].x; - pts[2].y = 2 * box.y_ctr - pts[0].y; - pts[3].x = 2 * box.x_ctr - pts[1].x; - pts[3].y = 2 * box.y_ctr - pts[1].y; -} - -template -HOST_DEVICE_INLINE int get_intersection_points( - const Point (&pts1)[4], - const Point (&pts2)[4], - Point (&intersections)[24]) { - // Line vector - // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1] - Point vec1[4], vec2[4]; - for (int i = 0; i < 4; i++) { - vec1[i] = pts1[(i + 1) % 4] - pts1[i]; - vec2[i] = pts2[(i + 1) % 4] - pts2[i]; - } - - // Line test - test all line combos for intersection - int num = 0; // number of intersections - for (int i = 0; i < 4; i++) { - for (int j = 0; j < 4; j++) { - // Solve for 2x2 Ax=b - T det = cross_2d(vec2[j], vec1[i]); - - // This takes care of parallel lines - if (fabs(det) <= 1e-14) { - continue; - } - - auto vec12 = pts2[j] - pts1[i]; - - T t1 = cross_2d(vec2[j], vec12) / det; - T t2 = cross_2d(vec1[i], vec12) / det; - - if (t1 >= 0.0f && t1 <= 1.0f && t2 >= 0.0f && t2 <= 1.0f) { - intersections[num++] = pts1[i] + vec1[i] * t1; - } - } - } - - // Check for vertices of rect1 inside rect2 - { - const auto& AB = vec2[0]; - const auto& DA = vec2[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - // assume ABCD is the rectangle, and P is the point to be judged - // P is inside ABCD iff. P's projection on AB lies within AB - // and P's projection on AD lies within AD - - auto AP = pts1[i] - pts2[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts1[i]; - } - } - } - - // Reverse the check - check for vertices of rect2 inside rect1 - { - const auto& AB = vec1[0]; - const auto& DA = vec1[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - auto AP = pts2[i] - pts1[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts2[i]; - } - } - } - - return num; -} - -template -HOST_DEVICE_INLINE int convex_hull_graham( - const Point (&p)[24], - const int& num_in, - Point (&q)[24], - bool shift_to_zero = false) { - assert(num_in >= 2); - - // Step 1: - // Find point with minimum y - // if more than 1 points have the same minimum y, - // pick the one with the minimum x. 
- int t = 0; - for (int i = 1; i < num_in; i++) { - if (p[i].y < p[t].y || (p[i].y == p[t].y && p[i].x < p[t].x)) { - t = i; - } - } - auto& start = p[t]; // starting point - - // Step 2: - // Subtract starting point from every points (for sorting in the next step) - for (int i = 0; i < num_in; i++) { - q[i] = p[i] - start; - } - - // Swap the starting point to position 0 - auto tmp = q[0]; - q[0] = q[t]; - q[t] = tmp; - - // Step 3: - // Sort point 1 ~ num_in according to their relative cross-product values - // (essentially sorting according to angles) - // If the angles are the same, sort according to their distance to origin - T dist[24]; -#ifdef __CUDACC__ - // compute distance to origin before sort, and sort them together with the - // points - for (int i = 0; i < num_in; i++) { - dist[i] = dot_2d(q[i], q[i]); - } - - // CUDA version - // In the future, we can potentially use thrust - // for sorting here to improve speed (though not guaranteed) - for (int i = 1; i < num_in - 1; i++) { - for (int j = i + 1; j < num_in; j++) { - T crossProduct = cross_2d(q[i], q[j]); - if ((crossProduct < -1e-6) || - (fabs(crossProduct) < 1e-6 && dist[i] > dist[j])) { - auto q_tmp = q[i]; - q[i] = q[j]; - q[j] = q_tmp; - auto dist_tmp = dist[i]; - dist[i] = dist[j]; - dist[j] = dist_tmp; - } - } - } -#else - // CPU version - std::sort( - q + 1, q + num_in, [](const Point& A, const Point& B) -> bool { - T temp = cross_2d(A, B); - if (fabs(temp) < 1e-6) { - return dot_2d(A, A) < dot_2d(B, B); - } else { - return temp > 0; - } - }); - // compute distance to origin after sort, since the points are now different. - for (int i = 0; i < num_in; i++) { - dist[i] = dot_2d(q[i], q[i]); - } -#endif - - // Step 4: - // Make sure there are at least 2 points (that don't overlap with each other) - // in the stack - int k; // index of the non-overlapped second point - for (k = 1; k < num_in; k++) { - if (dist[k] > 1e-8) { - break; - } - } - if (k == num_in) { - // We reach the end, which means the convex hull is just one point - q[0] = p[t]; - return 1; - } - q[1] = q[k]; - int m = 2; // 2 points in the stack - // Step 5: - // Finally we can start the scanning process. - // When a non-convex relationship between the 3 points is found - // (either concave shape or duplicated points), - // we pop the previous point from the stack - // until the 3-point relationship is convex again, or - // until the stack only contains two points - for (int i = k + 1; i < num_in; i++) { - while (m > 1) { - auto q1 = q[i] - q[m - 2], q2 = q[m - 1] - q[m - 2]; - // cross_2d() uses FMA and therefore computes round(round(q1.x*q2.y) - - // q2.x*q1.y) So it may not return 0 even when q1==q2. Therefore we - // compare round(q1.x*q2.y) and round(q2.x*q1.y) directly. (round means - // round to nearest floating point). - if (q1.x * q2.y >= q2.x * q1.y) - m--; - else - break; - } - // Using double also helps, but float can solve the issue for now. - // while (m > 1 && cross_2d(q[i] - q[m - 2], q[m - 1] - q[m - 2]) - // >= 0) { - // m--; - // } - q[m++] = q[i]; - } - - // Step 6 (Optional): - // In general sense we need the original coordinates, so we - // need to shift the points back (reverting Step 2) - // But if we're only interested in getting the area/perimeter of the shape - // We can simply return. 
- if (!shift_to_zero) { - for (int i = 0; i < m; i++) { - q[i] += start; - } - } - - return m; -} - -template -HOST_DEVICE_INLINE T polygon_area(const Point (&q)[24], const int& m) { - if (m <= 2) { - return 0; - } - - T area = 0; - for (int i = 1; i < m - 1; i++) { - area += fabs(cross_2d(q[i] - q[0], q[i + 1] - q[0])); - } - - return area / 2.0; -} - -template -HOST_DEVICE_INLINE T rotated_boxes_intersection( - const RotatedBox& box1, - const RotatedBox& box2) { - // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned - // from rotated_rect_intersection_pts - Point intersectPts[24], orderedPts[24]; - - Point pts1[4]; - Point pts2[4]; - get_rotated_vertices(box1, pts1); - get_rotated_vertices(box2, pts2); - - int num = get_intersection_points(pts1, pts2, intersectPts); - - if (num <= 2) { - return 0.0; - } - - // Convex Hull to order the intersection points in clockwise order and find - // the contour area. - int num_convex = convex_hull_graham(intersectPts, num, orderedPts, true); - return polygon_area(orderedPts, num_convex); -} - -} // namespace - -template -HOST_DEVICE_INLINE T -single_box_iou_rotated(T const* const box1_raw, T const* const box2_raw) { - // shift center to the middle point to achieve higher precision in result - RotatedBox box1, box2; - auto center_shift_x = (box1_raw[0] + box2_raw[0]) / 2.0; - auto center_shift_y = (box1_raw[1] + box2_raw[1]) / 2.0; - box1.x_ctr = box1_raw[0] - center_shift_x; - box1.y_ctr = box1_raw[1] - center_shift_y; - box1.w = box1_raw[2]; - box1.h = box1_raw[3]; - box1.a = box1_raw[4]; - box2.x_ctr = box2_raw[0] - center_shift_x; - box2.y_ctr = box2_raw[1] - center_shift_y; - box2.w = box2_raw[2]; - box2.h = box2_raw[3]; - box2.a = box2_raw[4]; - - T area1 = box1.w * box1.h; - T area2 = box2.w * box2.h; - if (area1 < 1e-14 || area2 < 1e-14) { - return 0.f; - } - - T intersection = rotated_boxes_intersection(box1, box2); - T iou = intersection / (area1 + area2 - intersection); - return iou; -} - -} // namespace detectron2 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/__init__.py deleted file mode 100644 index 0fce5b997eb2567e2dfc894d4e75ea4a6e3f0e72..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/networks/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from __future__ import absolute_import - -from networks.AugmentCE2P import resnet101 - -__factory = { - 'resnet101': resnet101, -} - - -def init_model(name, *args, **kwargs): - if name not in __factory.keys(): - raise KeyError("Unknown model arch: {}".format(name)) - return __factory[name](*args, **kwargs) \ No newline at end of file diff --git a/spaces/hebert2099/MusicGen/audiocraft/quantization/__init__.py b/spaces/hebert2099/MusicGen/audiocraft/quantization/__init__.py deleted file mode 100644 index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/audiocraft/quantization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .vq import ResidualVectorQuantizer -from .base import BaseQuantizer, DummyQuantizer, QuantizedResult diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/data_utils.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/data_utils.py deleted file mode 100644 index e9246c6c8f2ff3c37a7f8529ea1593c7f80f887e..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/data_utils.py +++ /dev/null @@ -1,393 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def 
__getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - audiopath = "E:/uma_voice/" + audiopath - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = 
max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j 
in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task503_Glacier_mtl.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task503_Glacier_mtl.py deleted file mode 100644 index 16b47198417873d5d7931394c73b1572318962d8..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task503_Glacier_mtl.py +++ /dev/null @@ -1,133 +0,0 @@ -import numpy as np -from batchgenerators.utilities.file_and_folder_operations import * -from nnunet.dataset_conversion.utils import generate_dataset_json -from nnunet.paths import nnUNet_raw_data, preprocessing_output_dir -from nnunet.utilities.file_conversions import * -import argparse -import random - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("-data_percentage", default=100, - help="percentage of the dataset used for training validation and test") - parser.add_argument("-base", - help="path to directory of data_raw") - args = parser.parse_args() - data_percentage = args.data_percentage - - """ - nnU-Net was originally built for 3D images. It is also strongest when applied to 3D segmentation problems because a - large proportion of its design choices were built with 3D in mind. Also note that many 2D segmentation problems, - especially in the non-biomedical domain, may benefit from pretrained network architectures which nnU-Net does not - support. - Still, there is certainly a need for an out of the box segmentation solution for 2D segmentation problems. And - also on 2D segmentation tasks nnU-Net cam perform extremely well! We have, for example, won a 2D task in the cell - tracking challenge with nnU-Net (see our Nature Methods paper) and we have also successfully applied nnU-Net to - histopathological segmentation problems. - Working with 2D data in nnU-Net requires a small workaround in the creation of the dataset. Essentially, all images - must be converted to pseudo 3D images (so an image with shape (X, Y) needs to be converted to an image with shape - (1, X, Y). The resulting image must be saved in nifti format. Hereby it is important to set the spacing of the - first axis (the one with shape 1) to a value larger than the others. If you are working with niftis anyways, then - doing this should be easy for you. This example here is intended for demonstrating how nnU-Net can be used with - 'regular' 2D images. We selected the massachusetts road segmentation dataset for this because it can be obtained - easily, it comes with a good amount of training cases but is still not too large to be difficult to handle. 
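The comment block above describes the 2D workaround: every (X, Y) image is wrapped as a pseudo-3D volume of shape (1, X, Y) and written as NIfTI, with the spacing of the singleton axis set much larger than the in-plane spacing so nnU-Net treats the data as 2D. The sketch below illustrates that idea with SimpleITK; the file names, the single-channel handling, and the 999.0 spacing value are assumptions for illustration, not the project's own `convert_2d_image_to_nifti` utility.

```python
# Minimal sketch: store a 2-D PNG as a pseudo-3-D NIfTI of shape (1, X, Y).
# Assumes SimpleITK and scikit-image are installed; paths are placeholders.
import numpy as np
import SimpleITK as sitk
from skimage import io

def to_pseudo_3d_nifti(png_path, out_path, in_plane_spacing=(1.0, 1.0)):
    img = io.imread(png_path)              # (X, Y) or (X, Y, C)
    if img.ndim == 3:
        img = img[..., 0]                  # keep one channel for simplicity
    vol = img[None].astype(np.float32)     # add the singleton axis -> (1, X, Y)
    itk = sitk.GetImageFromArray(vol)      # SimpleITK reads numpy as (z, y, x)
    # Spacing is given in (x, y, z) order; the large z spacing marks the fake axis.
    itk.SetSpacing((in_plane_spacing[1], in_plane_spacing[0], 999.0))
    sitk.WriteImage(itk, out_path)

# nnU-Net expects a _0000 modality suffix on training images.
to_pseudo_3d_nifti("case_0001.png", "case_0001_0000.nii.gz")
```

Segmentation masks get the same wrapping, only with integer class labels instead of float intensities.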
- """ - - # download dataset from https://www.kaggle.com/insaff/massachusetts-roads-dataset - # extract the zip file, then set the following path according to your system: - base = args.base - # this folder should have the training and testing subfolders - - # now start the conversion to nnU-Net: - task_name = 'Task503_Glacier_mtl' - target_base = join(nnUNet_raw_data, task_name) - target_imagesTr = join(target_base, "imagesTr") - target_imagesTs = join(target_base, "imagesTs") - target_labelsTs = join(target_base, "labelsTs") - target_labelsTr = join(target_base, "labelsTr") - - - maybe_mkdir_p(target_imagesTr) - maybe_mkdir_p(target_labelsTs) - maybe_mkdir_p(target_imagesTs) - maybe_mkdir_p(target_labelsTr) - - # convert the training examples. Not all training images have labels, so we just take the cases for which there are - # labels - label_fronts_dir_tr = join(base, 'fronts_dilated_5', 'train') - label_zones_dir_tr = join(base, 'zones', 'train') - images_dir_tr = join(base, 'sar_images', 'train') - - training_cases = subfiles(label_fronts_dir_tr, suffix='.png', join=False) - num_samples = int(len(training_cases)/100 * int(data_percentage)) - training_cases_sampled = random.sample(training_cases, num_samples) - print('Train samples:') - for label_front_tr in training_cases_sampled: - unique_name = label_front_tr[:-len('_front.png')]# just the filename with the extension cropped away, so img-2.png becomes img-2 as unique_name - print(unique_name) - image_tr = unique_name + '.png' - label_zone_tr = unique_name + '_zones.png' - - input_front_file = join(label_fronts_dir_tr, label_front_tr) - input_zone_file = join(label_zones_dir_tr, label_zone_tr) - input_image_file = join(images_dir_tr, image_tr) - - output_image_file = join(target_imagesTr, unique_name) # do not specify a file ending! This will be done for you - output_seg_file = join(target_labelsTr, unique_name) # do not specify a file ending! This will be done for you - - # this utility will convert 2d images that can be read by skimage.io.imread to nifti. You don't need to do anything. - # if this throws an error for your images, please just look at the code for this function and adapt it to your needs - convert_2d_image_to_nifti(input_image_file, output_image_file, is_seg=False) - - # the labels are stored as 0: background, 255: road. We need to convert the 255 to 1 because nnU-Net expects - # the labels to be consecutive integers. 
This can be achieved with setting a transform - convert_mtl_image_to_nifti(input_front_file, input_zone_file, output_seg_file, is_seg=True) - - # now do the same for the test set - label_fronts_dir_ts = join(base, 'fronts', 'test') - label_zones_dir_ts = join(base, 'zones', 'test') - images_dir_ts = join(base, 'sar_images', 'test') - - testing_cases = subfiles(label_fronts_dir_ts, suffix='.png', join=False) - num_samples = int(len(testing_cases) / 100 * int(data_percentage)) - testing_cases_sampled = random.sample(testing_cases, num_samples) - print('Test samples:') - for label_front_ts in testing_cases_sampled: - unique_name = label_front_ts[:-len('_front.png')] - print(unique_name) - image_ts = unique_name + '.png' - label_zone_ts = unique_name + '_zones.png' - input_front_file = join(label_fronts_dir_ts, label_front_ts) - input_zone_file = join(label_zones_dir_ts, label_zone_ts) - input_image_file = join(images_dir_ts, image_ts) - - output_image_file = join(target_imagesTs, unique_name) - output_seg_file = join(target_labelsTs, unique_name) - - convert_2d_image_to_nifti(input_image_file, output_image_file, is_seg=False) - convert_mtl_image_to_nifti(input_front_file, input_zone_file, output_seg_file, is_seg=True) - - # finally we can call the utility for generating a dataset.json - generate_dataset_json(join(target_base, 'dataset.json'), target_imagesTr, target_imagesTs, ('SAR',), - labels={'label0': {0: 'background', 1: 'front'}, - 'label1': {0: 'background', 1: 'stone', 2: 'glacier', 3: 'ocean'}}, - dataset_name=task_name, license='hands off!') - - """ - once this is completed, you can use the dataset like any other nnU-Net dataset. Note that since this is a 2D - dataset there is no need to run preprocessing for 3D U-Nets. You should therefore run the - `nnUNet_plan_and_preprocess` command like this: - - > nnUNet_plan_and_preprocess -t 120 -pl3d None - - once that is completed, you can run the trainings as follows: - > nnUNet_train 2d nnUNetTrainerV2 120 FOLD - - (where fold is again 0, 1, 2, 3 and 4 - 5-fold cross validation) - - there is no need to run nnUNet_find_best_configuration because there is only one model to choose from. - Note that without running nnUNet_find_best_configuration, nnU-Net will not have determined a postprocessing - for the whole cross-validation. Spoiler: it will determine not to run postprocessing anyways. 
If you are using - a different 2D dataset, you can make nnU-Net determine the postprocessing by using the - `nnUNet_determine_postprocessing` command - """ diff --git a/spaces/hoang1007/wav2vec2/finetuning/run.sh b/spaces/hoang1007/wav2vec2/finetuning/run.sh deleted file mode 100644 index 761efbc8c5939c1053bdba4cbb4d3f7005827652..0000000000000000000000000000000000000000 --- a/spaces/hoang1007/wav2vec2/finetuning/run.sh +++ /dev/null @@ -1,13 +0,0 @@ -python3 main.py \ - --batch_size 2 \ - --num_workers 2 \ - --classifier_lr 1e-4 \ - --wav2vec2_lr 1e-5 \ - --max_epochs 10 \ - --accelerator cpu \ - --weight_decay 0.001 \ - --warmup_steps 0.1 \ - --constant_steps 0.4 \ - --scheduler_factor 0.001 \ - --data_dir data \ - --ckpt_dir ckpt diff --git a/spaces/huggan/butterfly-gan/assets/code_snippets/latent_walk_music.py b/spaces/huggan/butterfly-gan/assets/code_snippets/latent_walk_music.py deleted file mode 100644 index af98cd22a234d847066858945449f437da0c8503..0000000000000000000000000000000000000000 --- a/spaces/huggan/butterfly-gan/assets/code_snippets/latent_walk_music.py +++ /dev/null @@ -1,55 +0,0 @@ -#Code Author: Jonathan Whitaker 😎 - -import librosa -import soundfile as sf -from scipy.signal import savgol_filter - -# The driving audio file -audio_file = './sounds/bensound-cute.wav' #@param - -# How many points in the base latent walk loop -n_points = 6 #@param - -# Smooths the animation effect, smaller=jerkier, must be odd -filter_window_size=301 #@param - -# How much should we scale position based on music vs the base path? -chr_scale = 0.5 #@param -base_scale = 0.3 #@param - -# Load the file -X, sample_rate = sf.read(audio_file, dtype='float32') - -X= X[:int(len(X)*0.5)] - -# Remove percussive elements -harmonic = librosa.effects.harmonic(X[:,0]) - -# Get chroma_stft (power in different notes) -chroma = librosa.feature.chroma_stft(harmonic) # Just one channel - -# Smooth these out -chroma = savgol_filter(chroma, filter_window_size, 3) - -# Calculate how many frames we want -fps = 25 -duration = X.shape[0] / sample_rate -print('Duration:', duration) -n_steps = int(fps * duration) -print('N frames:', n_steps, fps * duration) - -latents = torch.randn(n_points, 256)*base_scale -chroma_latents = torch.randn(12, 256)*chr_scale - -frames=[] -for i in tqdm(range(n_steps)): - p1 = max(0, int(n_points*i/n_steps)) - p2 = min(n_points, int(n_points*i/n_steps)+1)%n_points # so it wraps back to 0 - frac = (i-(p1*(n_steps/n_points))) / (n_steps/n_points) - l = latents[p1]*(1-frac) + latents[p2]*frac - for c in range(12): # HERE adding the music influence to the latent - scale_factor = chroma[c, int(i*chroma.shape[1]/n_steps)] - l += chroma_latents[c]*chr_scale*scale_factor - im = model.G(l.unsqueeze(0)).clamp_(0., 1.) 
- frame=(im[0].permute(1, 2, 0).detach().cpu().numpy()*255).astype(np.uint8) - frames.append(frame) diff --git a/spaces/hylee/apdrawing/APDrawingGAN2/preprocess/get_partmask.py b/spaces/hylee/apdrawing/APDrawingGAN2/preprocess/get_partmask.py deleted file mode 100644 index 9a154d693322c42e9283efdc4119283605720449..0000000000000000000000000000000000000000 --- a/spaces/hylee/apdrawing/APDrawingGAN2/preprocess/get_partmask.py +++ /dev/null @@ -1,165 +0,0 @@ -import cv2 -import os, glob, csv, shutil -import numpy as np -import dlib -import math -from shapely.geometry import Point -from shapely.geometry import Polygon -import sys - - -def getfeats(featpath): - trans_points = np.empty([68,2],dtype=np.int64) - with open(featpath, 'r') as csvfile: - reader = csv.reader(csvfile, delimiter=' ') - for ind,row in enumerate(reader): - trans_points[ind,:] = row - return trans_points - -def getinternal(lm1,lm2): - lminternal = [] - if abs(lm1[1]-lm2[1]) > abs(lm1[0]-lm2[0]): - if lm1[1] > lm2[1]: - tmp = lm1 - lm1 = lm2 - lm2 = tmp - for y in range(lm1[1]+1,lm2[1]): - x = int(round(float(y-lm1[1])/(lm2[1]-lm1[1])*(lm2[0]-lm1[0])+lm1[0])) - lminternal.append((x,y)) - else: - if lm1[0] > lm2[0]: - tmp = lm1 - lm1 = lm2 - lm2 = tmp - for x in range(lm1[0]+1,lm2[0]): - y = int(round(float(x-lm1[0])/(lm2[0]-lm1[0])*(lm2[1]-lm1[1])+lm1[1])) - lminternal.append((x,y)) - return lminternal - -def mulcross(p,x_1,x):#p-x_1,x-x_1 - vp = [p[0]-x_1[0],p[1]-x_1[1]] - vq = [x[0]-x_1[0],x[1]-x_1[1]] - return vp[0]*vq[1]-vp[1]*vq[0] - -def shape_to_np(shape, dtype="int"): - # initialize the list of (x, y)-coordinates - coords = np.zeros((shape.num_parts, 2), dtype=dtype) - # loop over all facial landmarks and convert them - # to a 2-tuple of (x, y)-coordinates - for i in range(0, shape.num_parts): - coords[i] = (shape.part(i).x, shape.part(i).y) - # return the list of (x, y)-coordinates - return coords - -def get_68lm(imgfile,savepath5,savepath68, detector, predictor): - image = cv2.imread(imgfile) - rgbImg = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - rects = detector(rgbImg, 1) - for (i, rect) in enumerate(rects): - landmarks = predictor(rgbImg, rect) - landmarks = shape_to_np(landmarks) - f = open(savepath68,'w') - for i in range(len(landmarks)): - lm = landmarks[i] - print(lm[0], lm[1], file=f) - f.close() - - ff = open(savepath5, 'w') - - lm = (landmarks[36]+landmarks[39])/2 - print(int(lm[0]), int(lm[1]), file=ff) - lm = (landmarks[45]+landmarks[42])/2 - print(int(lm[0]), int(lm[1]), file=ff) - lm = landmarks[30] - print(lm[0], lm[1], file=ff) - lm = landmarks[48] - print(lm[0], lm[1], file=ff) - lm = landmarks[54] - print(lm[0], lm[1], file=ff) - - ff.close() - -def get_partmask(imgfile,part,lmpath,savefile): - img = cv2.imread(imgfile) - mask = np.zeros(img.shape, np.uint8) - lms = getfeats(lmpath) - - if os.path.exists(savefile): - return - - if part == 'nose': - # 27,31....,35 -> up, left, right, lower5 -- eight points - up = [int(round(1.2*lms[27][0]-0.2*lms[33][0])),int(round(1.2*lms[27][1]-0.2*lms[33][1]))] - lower5 = [[0,0]]*5 - for i in range(31,36): - lower5[i-31] = [int(round(1.1*lms[i][0]-0.1*lms[27][0])),int(round(1.1*lms[i][1]-0.1*lms[27][1]))] - ratio = 2.5 - left = [int(round(ratio*lower5[0][0]-(ratio-1)*lower5[1][0])),int(round(ratio*lower5[0][1]-(ratio-1)*lower5[1][1]))] - right = [int(round(ratio*lower5[4][0]-(ratio-1)*lower5[3][0])),int(round(ratio*lower5[4][1]-(ratio-1)*lower5[3][1]))] - loop = [up,left,lower5[0],lower5[1],lower5[2],lower5[3],lower5[4],right] - elif part == 'eyel': - height = 
max(lms[41][1]-lms[37][1],lms[40][1]-lms[38][1]) - width = lms[39][0]-lms[36][0] - ratio = 0.1 - gap = int(math.ceil(width*ratio)) - ratio2 = 0.6 - gaph = int(math.ceil(height*ratio2)) - ratio3 = 1.5 - gaph2 = int(math.ceil(height*ratio3)) - upper = [[lms[17][0]-2*gap,lms[17][1]],[lms[17][0]-2*gap,lms[17][1]-gaph],[lms[18][0],lms[18][1]-gaph],[lms[19][0],lms[19][1]-gaph],[lms[20][0],lms[20][1]-gaph],[lms[21][0]+gap*2,lms[21][1]-gaph]] - lower = [[lms[39][0]+gap,lms[40][1]+gaph2],[lms[40][0],lms[40][1]+gaph2],[lms[41][0],lms[41][1]+gaph2],[lms[36][0]-2*gap,lms[41][1]+gaph2]] - loop = upper + lower - loop.reverse() - elif part == 'eyer': - height = max(lms[47][1]-lms[43][1],lms[46][1]-lms[44][1]) - width = lms[45][0]-lms[42][0] - ratio = 0.1 - gap = int(math.ceil(width*ratio)) - ratio2 = 0.6 - gaph = int(math.ceil(height*ratio2)) - ratio3 = 1.5 - gaph2 = int(math.ceil(height*ratio3)) - upper = [[lms[22][0]-2*gap,lms[22][1]],[lms[22][0]-2*gap,lms[22][1]-gaph],[lms[23][0],lms[23][1]-gaph],[lms[24][0],lms[24][1]-gaph],[lms[25][0],lms[25][1]-gaph],[lms[26][0]+gap*2,lms[26][1]-gaph]] - lower = [[lms[45][0]+2*gap,lms[46][1]+gaph2],[lms[46][0],lms[46][1]+gaph2],[lms[47][0],lms[47][1]+gaph2],[lms[42][0]-gap,lms[42][1]+gaph2]] - loop = upper + lower - loop.reverse() - elif part == 'mouth': - height = lms[62][1]-lms[51][1] - width = lms[54][0]-lms[48][0] - ratio = 1 - ratio2 = 0.2#0.1 - gaph = int(math.ceil(ratio*height)) - gapw = int(math.ceil(ratio2*width)) - left = [(lms[48][0]-gapw,lms[48][1])] - upper = [(lms[i][0], lms[i][1]-gaph) for i in range(48,55)] - right = [(lms[54][0]+gapw,lms[54][1])] - lower = [(lms[i][0], lms[i][1]+gaph) for i in list(range(54,60))+[48]] - loop = left + upper + right + lower - loop.reverse() - pl = Polygon(loop) - - for i in range(mask.shape[0]): - for j in range(mask.shape[1]): - if part != 'mouth' and part != 'jaw': - p = [j,i] - flag = 1 - for k in range(len(loop)): - if mulcross(p,loop[k],loop[(k+1)%len(loop)]) < 0:#y downside... 
>0 represents counter-clockwise, <0 clockwise - flag = 0 - break - else: - p = Point(j,i) - flag = pl.contains(p) - if flag: - mask[i,j] = [255,255,255] - if not os.path.exists(os.path.dirname(savefile)): - os.mkdir(os.path.dirname(savefile)) - cv2.imwrite(savefile,mask) - -if __name__ == '__main__': - imgfile = 'example/img_1701_aligned.png' - lmfile = 'example/img_1701_aligned_68lm.txt' - get_68lm(imgfile,lmfile) - for part in ['eyel','eyer','nose','mouth']: - savepath = 'example/img_1701_aligned_'+part+'mask.png' - get_partmask(imgfile,part,lmfile,savepath) diff --git a/spaces/hylee/photo2cartoon/p2c/models/mobilefacenet.py b/spaces/hylee/photo2cartoon/p2c/models/mobilefacenet.py deleted file mode 100644 index 8ad4748951aa522145f865917e06853ff4b70783..0000000000000000000000000000000000000000 --- a/spaces/hylee/photo2cartoon/p2c/models/mobilefacenet.py +++ /dev/null @@ -1,258 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, ReLU, Sigmoid, Dropout, \ - MaxPool2d, AdaptiveAvgPool2d, Sequential, Module -import torch -from collections import namedtuple - - -################################## Original Arcface Model ############################################################# - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d( - channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d( - channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), BatchNorm2d(depth)) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth)) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth)) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - '''A named tuple describing a ResNet block.''' - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + 
[Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - return blocks - - -class Backbone(Module): - def __init__(self, num_layers, drop_ratio, mode='ir'): - super(Backbone, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append( - unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -################################## MobileFaceNet ############################################################# - -class Conv_block(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(Conv_block, self).__init__() - self.conv = Conv2d(in_c, out_channels=out_c, kernel_size=kernel, groups=groups, stride=stride, padding=padding, - bias=False) - self.bn = BatchNorm2d(out_c) - self.prelu = PReLU(out_c) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - x = self.prelu(x) - return x - - -class Linear_block(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(Linear_block, self).__init__() - self.conv = Conv2d(in_c, out_channels=out_c, kernel_size=kernel, groups=groups, stride=stride, padding=padding, - bias=False) - self.bn = BatchNorm2d(out_c) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - return x - - -class Depth_Wise(Module): - def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1): - super(Depth_Wise, self).__init__() - self.conv = Conv_block(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)) - self.conv_dw = Conv_block(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride) - self.project = Linear_block(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1)) - self.residual = residual - - def forward(self, x): - if self.residual: - short_cut = x - x = self.conv(x) - x = self.conv_dw(x) - x = self.project(x) - if self.residual: - output = short_cut + x - else: - output = x - return output - - -class Residual(Module): - def __init__(self, c, num_block, groups, kernel=(3, 3), 
stride=(1, 1), padding=(1, 1)): - super(Residual, self).__init__() - modules = [] - for _ in range(num_block): - modules.append( - Depth_Wise(c, c, residual=True, kernel=kernel, padding=padding, stride=stride, groups=groups)) - self.model = Sequential(*modules) - - def forward(self, x): - return self.model(x) - - -class MobileFaceNet(Module): - def __init__(self, embedding_size): - super(MobileFaceNet, self).__init__() - self.conv1 = Conv_block(3, 64, kernel=(3, 3), stride=(2, 2), padding=(1, 1)) - self.conv2_dw = Conv_block(64, 64, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64) - self.conv_23 = Depth_Wise(64, 64, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128) - self.conv_3 = Residual(64, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)) - self.conv_34 = Depth_Wise(64, 128, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256) - self.conv_4 = Residual(128, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)) - self.conv_45 = Depth_Wise(128, 128, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512) - self.conv_5 = Residual(128, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)) - self.conv_6_sep = Conv_block(128, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0)) - self.conv_6_dw = Linear_block(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)) - self.conv_6_flatten = Flatten() - self.linear = Linear(512, embedding_size, bias=False) - self.bn = BatchNorm1d(embedding_size) - - def forward(self, x): - out = self.conv1(x) - - out = self.conv2_dw(out) - - out = self.conv_23(out) - - out = self.conv_3(out) - - out = self.conv_34(out) - - out = self.conv_4(out) - - out = self.conv_45(out) - - out = self.conv_5(out) - - out = self.conv_6_sep(out) - - out = self.conv_6_dw(out) - - out = self.conv_6_flatten(out) - - out = self.linear(out) - - out = self.bn(out) - return l2_norm(out) diff --git a/spaces/ifey/chatdemo/gradiodemo/Flask/t.py b/spaces/ifey/chatdemo/gradiodemo/Flask/t.py deleted file mode 100644 index 5d17d9a121fd0ef2006858cfb9316fb8ea6bf9db..0000000000000000000000000000000000000000 --- a/spaces/ifey/chatdemo/gradiodemo/Flask/t.py +++ /dev/null @@ -1,22 +0,0 @@ -from flask import Flask, render_template, request -import gradio as gr - -app = Flask(__name__) - -# 定义 Gradio Blocks 示例 -def gradio_blocks_demo(): - with gr.Blocks() as demo: - # 在这里添加您的 Gradio Blocks 组件 - gr.Button("Test") - return demo - -# 创建 Gradio Blocks 示例 -gradio_demo = gradio_blocks_demo() - -@app.route('/') -def home(): - # 渲染 HTML 模板,将 Gradio Blocks 示例嵌入到模板中 - return render_template('index.html', gradio_demo=gradio_demo) - -if __name__ == '__main__': - gradio_demo.launch() diff --git a/spaces/inamXcontru/PoeticTTS/Adobe After Effects CC 2015.3 V13.8.1 Final Active (Rootdorid) __HOT__ Full Version.md b/spaces/inamXcontru/PoeticTTS/Adobe After Effects CC 2015.3 V13.8.1 Final Active (Rootdorid) __HOT__ Full Version.md deleted file mode 100644 index 004ae5f6898b3ce69cddd7af488a2343bbb42f8d..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Adobe After Effects CC 2015.3 V13.8.1 Final Active (Rootdorid) __HOT__ Full Version.md +++ /dev/null @@ -1,12 +0,0 @@ -

Adobe After Effects CC 2015.3 v13.8.1 Final Active (Rootdorid) full version


Download --->>> https://gohhs.com/2uz3QF



-
-The After Effects CC Essentials are a set of 11 articles that help you get. The latest. - -To install you must first download the Adobe After Effects or After. After Effects CC 2015 (Creative Suite. After. cc 2015.3. latest. 3. 2013. Adobe. The. This. latest. After. cc. This. After. effects. Apple. License. Buy.... Installation. You. After. effects. This. you. 2. Introduction. 3. Final. In. This. After. effects. this. After. effects. cc. 2.. You. CC. This. After..... Adobe. After. 2. After. effects. You. After. Effects. 3.. After. cc. CC. CC. This. - -After Effects CC 2015 (Creative Suite.. Adobe After. After. cc 2015. After. 3. is now available.. From. import. to. final. The. The. After. Effects.. The. After. effects. This. After. effects. This. After. effects. cc. Latest. cc. 3. 2013.. Adobe. After. Adobe. Adobe. latest. Adobe. This. After. cc. Adobe. After. latest. Adobe. Adobe. Adobe. newest. Adobe. Adobe. After. CC. CC. CC. This. After. CC. CC. This. After. 3. The. This. After. latest. The. After. After. CC. After. latest. CC. CC. CC. This. After. CC. CC. This. After. CC. CC. CC. This. After. This. After. Adobe. After. 2. After. effects.. The. This. After. effects. This. After. effects. CC. CC. CC. This. Adobe. After. Adobe. Adobe. After. CC. CC. CC. This. Adobe. Adobe. 2. 2. Installation. You. After. effects. This. The. After. effects. This. Adobe. After. CC. Adobe. After. CC. CC. CC. Adobe. After. CC. CC. CC. CC. Adobe. After. CC. CC. CC. CC. Adobe. CC. CC. CC. CC. CC... - -The. Adobe. After. CC. Adobe. After. CC. CC. CC. Adobe. After. CC. CC. CC. CC. 4fefd39f24
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bug Mafia Ridica-ma La Cer Download UPD.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bug Mafia Ridica-ma La Cer Download UPD.md deleted file mode 100644 index d3565e4facb825e509bdbfe165dfe0de18db48da..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bug Mafia Ridica-ma La Cer Download UPD.md +++ /dev/null @@ -1,34 +0,0 @@ - -

B.U.G. Mafia feat. Loredana - Ridica-ma la cer: A Hit Song from Romania

-

B.U.G. Mafia is a Romanian hip hop group that has been active since 1993. They are one of the most successful and influential rap groups in Romania, having released over 20 albums and won numerous awards. One of their popular songs is "Ridica-ma la cer" (Lift me up to the sky), which features the vocals of Loredana Groza, a famous Romanian singer and actress.

-

"Ridica-ma la cer" was released in 2013 as part of Loredana's album "Magic", which was recorded live at Sala Palatului in Bucharest. The song is a catchy and upbeat tune that blends hip hop beats with pop melodies and lyrics about love and happiness. The chorus goes like this:

-

bug mafia ridica-ma la cer download


Download Zip ===> https://urlin.us/2uEwpT



-
-Ridica-ma la cer, ridica-ma la cer
-Sa vad lumea de sus, sa vad lumea de sus
-Ridica-ma la cer, ridica-ma la cer
-Sa simt cum ma iubesti, sa simt cum ma iubesti

-(Lift me up to the sky, lift me up to the sky
-To see the world from above, to see the world from above
-Lift me up to the sky, lift me up to the sky
-To feel how you love me, to feel how you love me) -
-

The song was well received by both fans and critics, who praised its catchy chorus, energetic performance and positive message. It also became a hit on YouTube, where it has over 99 million views as of April 2023[^1^]. The song was also performed live by Loredana and B.U.G. Mafia at various concerts and events, such as the Magic concert in 2013[^1^] and the Agurida concert in 2011[^3^].

-

If you want to listen to or download "Ridica-ma la cer", you can find it on various platforms, such as YouTube[^1^] [^3^], SoundCloud[^4^] or File Host[^2^]. However, please be aware that some of these sources may not be legal or safe, so use them at your own risk. Alternatively, you can buy or stream the song from official sources, such as iTunes or Google Music[^1^].

-

"Ridica-ma la cer" is a song that showcases the talent and collaboration of two Romanian music icons: B.U.G. Mafia and Loredana. It is a song that will make you feel good and want to dance along. If you are looking for some Romanian hip hop with a pop twist, give it a try!

- -

But who are B.U.G. Mafia and Loredana, and how did they come to collaborate on this song? Let's find out more about their backgrounds and careers.

-

B.U.G. Mafia: The Pioneers of Romanian Hip Hop

-

B.U.G. Mafia stands for "Bucharest Underground Mafia", and it is composed of three members: Tataee (Vlad Irimia), Caddy (Călin Fercu) and Uzzi (Alin Demeter). They started as a breakdance group in 1993, but soon switched to rap music, inspired by American artists such as N.W.A., Ice-T and Public Enemy. They released their debut album, "Mafia", in 1995, which was one of the first Romanian hip hop albums ever. The album was controversial for its explicit lyrics and social criticism, but also gained them a loyal fan base and recognition in the underground scene.

-

Since then, B.U.G. Mafia has released 11 studio albums, four compilation albums and two live albums, selling over 1.5 million copies in Romania. They have also collaborated with many other Romanian artists, such as Paraziții, La Familia, Nicoleta Nucă, Andra and Smiley. Some of their most famous songs include "Poveste fără sfârșit" (Neverending story), "Străzile" (The streets), "În anii ce au trecut" (In the years that have passed) and "Fără cuvinte" (Without words). They have also won numerous awards, such as MTV Romania Music Awards, Romanian Music Awards and Radio Romania Music Awards.

-

B.U.G. Mafia is considered to be one of the most influential and respected rap groups in Romania, as well as in Eastern Europe. They have been praised for their originality, creativity and social awareness, as well as for their longevity and consistency. They have also been credited with popularizing hip hop culture in Romania and inspiring many other rap artists to follow their footsteps.

-

-

Loredana: The Queen of Romanian Pop

-

Loredana Groza is a Romanian singer, actress and TV personality who has been active since 1986. She started her career as a pop singer, winning various national and international contests and festivals, such as Mamaia Festival, Golden Stag Festival and Eurovision Song Contest. She released her debut album, "Bună seara, iubito" (Good evening, my love), in 1988, which was a huge success in Romania. The title track became one of her signature songs and a classic of Romanian pop music.

-

Since then, Loredana has released 16 studio albums, three live albums and two compilation albums, selling over 10 million copies worldwide. She has also experimented with various genres and styles, such as rock, folk, jazz, dance and hip hop. She has also collaborated with many other Romanian artists, such as Ștefan Bănică Jr., Horia Brenciu, Connect-R and Carla's Dreams. Some of her most famous songs include "Lele" (Hey), "Zig Zagga", "Apa" (Water) and "Made in România" (Made in Romania). She has also won numerous awards, such as MTV Romania Music Awards, Romanian Music Awards and Radio Romania Music Awards.

-

Loredana is considered to be one of the most successful and versatile singers in Romania, as well as a national icon and a role model for many young artists. She has been praised for her powerful voice, charismatic stage presence and artistic diversity. She has also been involved in various humanitarian and social causes, such as supporting children with disabilities, promoting education and fighting against domestic violence.

-

The Collaboration: A Magical Mix of Hip Hop and Pop

-

B.U.G. Mafia and Loredana first collaborated in 2009 on the song "Fără cuvinte" (Without words), which was part of B.U.G. Mafia's album "Înapoi în viitor" (Back to the future). The song was a hit in Romania, reaching the top of the charts and receiving positive reviews from critics and fans alike. The song was also nominated for Best Song at the MTV Romania Music Awards in 2010.

-

The success of the collaboration led to another one in 2013 on the song "Ridica-ma la cer" (Lift me up to the sky), which was part of L

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hc-Verma-Physics-Book-Class-9-Free-Download-UPDATED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hc-Verma-Physics-Book-Class-9-Free-Download-UPDATED.md deleted file mode 100644 index 8f19d851a00781049b26b77ac8771a688c7a2a99..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hc-Verma-Physics-Book-Class-9-Free-Download-UPDATED.md +++ /dev/null @@ -1,98 +0,0 @@ -## Hc Verma Physics Book Class 9 Free Download - - - - - - - - - -**Click Here ✺✺✺ [https://urlcod.com/2txvLX](https://urlcod.com/2txvLX)** - - - - - - - - - - - - Here is what I created: - -# How to Download HC Verma Physics Book for Class 9 for Free - - - -If you are looking for a comprehensive and easy-to-understand physics book for class 9, you might want to check out HC Verma Physics Book. This book is written by Dr. Harish Chandra Verma, a renowned physicist and professor at IIT Kanpur. It covers all the topics of the NCERT syllabus and also prepares you for competitive exams like JEE and NEET. - - - -HC Verma Physics Book for Class 9 is available in two volumes: Concepts of Physics Part 1 and Concepts of Physics Part 2. You can download both the volumes for free from the official website of Dr. Verma. Here are the steps to download the book: - - - -1. Go to [http://www.hcverma.in/books/](http://www.hcverma.in/books/) and click on the link for Concepts of Physics Part 1 or Part 2. - -2. You will be redirected to a Google Drive page where you can view or download the PDF file of the book. - -3. Click on the download icon on the top right corner of the page and save the file on your device. - - - -You can also download the solutions of the book from the same website. Just click on the link for Solutions of Concepts of Physics Part 1 or Part 2 and follow the same steps as above. - - - -HC Verma Physics Book for Class 9 is a great resource for learning physics and developing your problem-solving skills. You can also use it as a reference book for revising the concepts and practicing the numericals. However, you should also consult your teacher and NCERT textbook for clarifying any doubts or queries. - - Here is what I created: - -Some of the benefits of using HC Verma Physics Book for Class 9 are: - - - -- It explains the concepts in a simple and lucid manner with examples and illustrations. - -- It provides a variety of exercises and questions at the end of each chapter to test your understanding and application of the concepts. - -- It includes solved examples and hints for solving the difficult problems. - -- It covers the topics in a logical and systematic order, starting from the basics and gradually moving to the advanced level. - - - -If you want to get the best out of HC Verma Physics Book for Class 9, you should follow these tips: - - - -- Read the theory carefully and try to understand the concepts and principles behind it. - -- Do not skip any topic or chapter as they are interrelated and build on each other. - -- Solve the examples and exercises by yourself without looking at the solutions. If you get stuck, refer to the hints or solutions only after trying hard. - -- Revise the concepts and formulas regularly and practice the numericals as much as possible. - - - - Here is what I created: - -HC Verma Physics Book for Class 9 is not only useful for your school exams but also for your future studies and career. 
It helps you to develop a strong foundation in physics that will help you to excel in higher classes and competitive exams like JEE and NEET. It also helps you to develop an interest and curiosity in physics and its applications in the real world. - - - -If you want to learn more about HC Verma Physics Book for Class 9, you can visit the official website of Dr. Verma at [http://www.hcverma.in/](http://www.hcverma.in/). There you can find more information about the book, the author, the solutions, and other related resources. You can also contact Dr. Verma through email or social media if you have any feedback or queries. - - - -HC Verma Physics Book for Class 9 is one of the best physics books for class 9 students. It is recommended by many teachers and students who have used it and benefited from it. If you are looking for a free and easy way to download it, you can follow the steps given above and enjoy learning physics with HC Verma. - - 1b8d091108 - - - - - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/KMSAuto Net 2018 V1.13.9 Portable (All Windows Active) Free __HOT__ Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/KMSAuto Net 2018 V1.13.9 Portable (All Windows Active) Free __HOT__ Download.md deleted file mode 100644 index 477bd33071e514d913c43d39ae0c64557a60f087..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/KMSAuto Net 2018 V1.13.9 Portable (All Windows Active) Free __HOT__ Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

KMSAuto Net 2018 v1.13.9 Portable (All Windows Active) free download


Download File ->>> https://urlin.us/2uEvqx



- -December 28, 2018 - KMSAuto Net 2018 V2.13.9 Portable (All Windows Active) Utorrent kmsauto net 2016 v1.3.8 Portable-Activer Windows et Office, kmsauto net 2018 ... KmsAuto Net - free activator for Windows 10 and Office 2010 - 2016 Activation keys ... -Download activator and activation keys KMSAuto Net 2016 Portable + Add-ons. ... -KMSAuto Net 2016 is designed to activate Windows and Office. -KMSAuto Net automatic KMS-activator for operating systems Windows Vista, 7, Windows 8, 8.1, 10, Server 2008 -KMSAuto Net - free activator for Windows 7, 8, 10 and Office 2010, 2013 keys KMSAuto Net - the main activator for Windows and Office. ... 8a78ff9644
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/CRACK Native Instruments Solid EQ V1.1.1 Update-R2R [deepstatus] !EXCLUSIVE!.md b/spaces/inreVtussa/clothingai/Examples/CRACK Native Instruments Solid EQ V1.1.1 Update-R2R [deepstatus] !EXCLUSIVE!.md deleted file mode 100644 index c0ec864f2d15cc07fd16d1f25a6a785c4c2c148a..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/CRACK Native Instruments Solid EQ V1.1.1 Update-R2R [deepstatus] !EXCLUSIVE!.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

_VERIFIED_ Native Instruments Logic Pro X 9.5.2 R2R. Native Instruments Electro-Harmonix eb3D wireless effects mic,pearl []. 0f0x-soxbtd.mp3 [url= support. The first entrance of the slow-motion may be the king-fisher or woodpecker mommy. Native Instruments Kipling Pro Crack 1.2.1 [url=https://box.cloudinary.com/linaro-silicon-acquisitions/49b/1574/78f/b52/wiping.zip]Kinetic Fusion 1.0.5.0 Full Premium 5 Patch 100% Working[/url] > Powermac G5. CRACK Native Instruments Solid EQ V1.1.1 Update-R2R [deepstatus]

First of all, I had zero intention of taking part in this contest, but having downloaded this guitar-fret-work app, I got so fond of it that I felt compelled to give the developers my feedback on it. Download CRACK Native Instruments Solid Bus Comp V1.1.1 Update-R2R [deepstatus] Here.

-

CRACK Native Instruments Solid EQ V1.1.1 Update-R2R [deepstatus]


DOWNLOAD ✸✸✸ https://tiurll.com/2uCizR



-

satellite eclipse crack and serial number
The Numbers Are In: Is Verizon Really Losing Billions Of Customers?
indian black magic pdf download
http://lypdess.net/phpBB/viewtopic.php?t=1173
native instruments solid dynamics 7.0.2 update b3
Piano Booster 1.0.5.2
Burgiss makt: the betrayal (Forever drama)
Cinefacts! - Stereoscopic movie player
PALandDB - Client v3.4.8
GaussFire M1 - Soundfont Editor Free 6.5
Phychok Ver.2.1.1.09-R2R
KLDFurnace - KLDFurnace v3.6.16
Jeuxofit Crack v1.1.2-R2R





-

Native Instruments Reaktor V1.3.3 - R2R [deepstatus]. CRACK Native Instruments Solid Bus Comp v1.1.1 Update-R2R [deepstatus] crack glidos 1.53 Bubble Shooter Premium Edition Free Crack Serial Keygen.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Contabilidad De Costos 12 Edicion Horngren Solucionario.md b/spaces/inreVtussa/clothingai/Examples/Contabilidad De Costos 12 Edicion Horngren Solucionario.md deleted file mode 100644 index 873b1514685aa172b4f4571a3a4fc7e515c36551..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Contabilidad De Costos 12 Edicion Horngren Solucionario.md +++ /dev/null @@ -1,6 +0,0 @@ -

contabilidad de costos 12 edicion horngren solucionario


Download Zip ⇒⇒⇒ https://tiurll.com/2uClv0



- -º Ed por Horngren,Harrison, Oliver Las soluciones MANUALES A Adaptive Control, 2. ... más libros sobre contabilidad de costos horngren solucionario, puede utilizar las ... Home ; CONTABILIDAD OCTAVA EDICION HORNGREN HARRISON OLIVER Author: Margarita Almendarez. 1995 downloads 4352 Views 12MB Size. 4d29de3e1b
-
-
-

diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/text/japanese.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in 
_real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/jaisidhsingh/cluster-summ/utils/sentence_embedding.py b/spaces/jaisidhsingh/cluster-summ/utils/sentence_embedding.py deleted file mode 100644 index d67c69461c7eb39285c8206a436fa82e309ef5df..0000000000000000000000000000000000000000 --- a/spaces/jaisidhsingh/cluster-summ/utils/sentence_embedding.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -import sys -cwd = os.getcwd() -module2add = '/'.join(cwd.split("/")[:-1]) -sys.path.append(module2add) - -from configs.model_config import cfg as model_configs - -from transformers import AutoTokenizer, AutoModel -import torch - - -def mean_pooling(model_output, attention_mask): - token_embeddings = model_output[0] - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) - sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9) - return sum_embeddings / sum_mask - -def make_embeddings(sentence_list, pool_fn): - tokenizer = AutoTokenizer.from_pretrained(model_configs.sent_model_name) - model = AutoModel.from_pretrained(model_configs.sent_model_name) - - encoded_input = tokenizer( - sentence_list, - padding=True, - truncation=True, - max_length=model_configs.sent_model_seq_limit, - return_tensors='pt' - ) - with torch.no_grad(): - embeddings = model(**encoded_input) - - attn_mask = encoded_input['attention_mask'] - sentence_embeddings = pool_fn(embeddings, attn_mask) - return sentence_embeddings - -def test_embedder(): - sentences = ['This framework generates embeddings for each input sentence', - 'Sentences are passed as a list of string.', - 'The quick brown fox jumps over the lazy dog.'] - - embeddings = make_embeddings(sentences) - print(embeddings.shape) diff --git a/spaces/jbetker/tortoise/tortoise/utils/tokenizer.py b/spaces/jbetker/tortoise/tortoise/utils/tokenizer.py deleted file mode 100644 index 2f36a064f71388645b0a2f4a7a60eff983c683de..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/utils/tokenizer.py +++ /dev/null @@ -1,187 +0,0 @@ -import re - -import inflect -import torch -from tokenizers import Tokenizer - - -# Regular expression matching whitespace: -from unidecode import unidecode - -_whitespace_re = re.compile(r'\s+') - - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including number and abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = 
collapse_whitespace(text) - text = text.replace('"', '') - return text - -def lev_distance(s1, s2): - if len(s1) > len(s2): - s1, s2 = s2, s1 - - distances = range(len(s1) + 1) - for i2, c2 in enumerate(s2): - distances_ = [i2 + 1] - for i1, c1 in enumerate(s1): - if c1 == c2: - distances_.append(distances[i1]) - else: - distances_.append(1 + min((distances[i1], distances[i1 + 1], distances_[-1]))) - distances = distances_ - return distances[-1] - -class VoiceBpeTokenizer: - def __init__(self, vocab_file='tortoise/data/tokenizer.json'): - if vocab_file is not None: - self.tokenizer = Tokenizer.from_file(vocab_file) - - def preprocess_text(self, txt): - txt = english_cleaners(txt) - return txt - - def encode(self, txt): - txt = self.preprocess_text(txt) - txt = txt.replace(' ', '[SPACE]') - return self.tokenizer.encode(txt).ids - - def decode(self, seq): - if isinstance(seq, torch.Tensor): - seq = seq.cpu().numpy() - txt = self.tokenizer.decode(seq, skip_special_tokens=False).replace(' ', '') - txt = txt.replace('[SPACE]', ' ') - txt = txt.replace('[STOP]', '') - txt = txt.replace('[UNK]', '') - return txt \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/badge.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/badge.tsx deleted file mode 100644 index 8a05c5e844f6551efb3b35a0a23c748a9a6639b4..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from "react" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const badgeVariants = cva( - "inline-flex items-center rounded-full border border-stone-200 px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-stone-400 focus:ring-offset-2 dark:border-stone-800 dark:focus:ring-stone-800", - { - variants: { - variant: { - default: - "border-transparent bg-stone-900 text-stone-50 hover:bg-stone-900/80 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/80", - secondary: - "border-transparent bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80", - destructive: - "border-transparent bg-red-500 text-stone-50 hover:bg-red-500/80 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/80", - outline: "text-stone-950 dark:text-stone-50", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
- ) -} - -export { Badge, badgeVariants } diff --git a/spaces/jbilcke-hf/observer/src/app/page.tsx b/spaces/jbilcke-hf/observer/src/app/page.tsx deleted file mode 100644 index c35a4b7871b25940270c36356e199f91cd9457a7..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/app/page.tsx +++ /dev/null @@ -1,28 +0,0 @@ -"use server" - -import Head from "next/head" - -import Main from "./main" -import { TooltipProvider } from "@/components/ui/tooltip" - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts - -export default async function IndexPage({ params: { ownerId } }: { params: { ownerId: string }}) { - return ( - <> - - - - - -
- - ) -} \ No newline at end of file diff --git a/spaces/jiejiejie0420/bingo/src/app/page.tsx b/spaces/jiejiejie0420/bingo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
- - - ) -} diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py deleted file mode 100644 index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py +++ /dev/null @@ -1,509 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import ONNXVITS_modules as modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - self.w = None - self.reverse = None - self.noise_scale = None - def forward(self, x, x_mask, g=None): - w = self.w - reverse = self.reverse - noise_scale = self.noise_scale - - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return 
nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - self.reverse = None - def forward(self, x, x_mask, g=None): - reverse = self.reverse - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t] - x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask # 
z, m, logs : [b, h, t] - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 
256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - - if n_speakers > 0: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None): - torch.onnx.export( - self.enc_p, - (x, x_lengths), - "ONNX_net/enc_p.onnx", - input_names=["x", "x_lengths"], - output_names=["xout", "m_p", "logs_p", "x_mask"], - dynamic_axes={ - "x" : [1], - "xout" : [2], - "m_p" : [2], - "logs_p" : [2], - "x_mask" : [2] - }, - verbose=True, - ) - x, m_p, logs_p, x_mask = 
self.enc_p(x, x_lengths) - - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - self.dp.reverse = True - self.dp.noise_scale = noise_scale_w - torch.onnx.export( - self.dp, - (x, x_mask, g), - "ONNX_net/dp.onnx", - input_names=["x", "x_mask", "g"], - output_names=["logw"], - dynamic_axes={ - "x" : [2], - "x_mask" : [2], - "logw" : [2] - }, - verbose=True, - ) - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - - self.flow.reverse = True - torch.onnx.export( - self.flow, - (z_p, y_mask, g), - "ONNX_net/flow.onnx", - input_names=["z_p", "y_mask", "g"], - output_names=["z"], - dynamic_axes={ - "z_p" : [2], - "y_mask" : [2], - "z" : [2] - }, - verbose=True, - ) - z = self.flow(z_p, y_mask, g=g) - z_in = (z * y_mask)[:,:,:max_len] - - torch.onnx.export( - self.dec, - (z_in, g), - "ONNX_net/dec.onnx", - input_names=["z_in", "g"], - output_names=["o"], - dynamic_axes={ - "z_in" : [2], - "o" : [2] - }, - verbose=True, - ) - o = self.dec(z_in, g=g) - return o diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/memory_wrapper.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/memory_wrapper.py deleted file mode 100644 index ce8cb892057cf5c21e0eb4d30195980aafa24911..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/langchain_helpers/memory_wrapper.py +++ /dev/null @@ -1,197 +0,0 @@ -"""Langchain memory wrapper (for LlamaIndex).""" - -from typing import Any, Dict, List, Optional - -from langchain.memory.chat_memory import BaseChatMemory -from langchain.schema import AIMessage -from langchain.schema import BaseMemory as Memory -from langchain.schema import BaseMessage, HumanMessage -from pydantic import Field - -from gpt_index.indices.base import BaseGPTIndex -from gpt_index.readers.schema.base import Document -from gpt_index.utils import get_new_id - - -def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str: - """Get prompt input key. - - Copied over from langchain. - - """ - # "stop" is a special key that can be passed as input but is not used to - # format the prompt. - prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"])) - if len(prompt_input_keys) != 1: - raise ValueError(f"One input key expected got {prompt_input_keys}") - return prompt_input_keys[0] - - -class GPTIndexMemory(Memory): - """Langchain memory wrapper (for LlamaIndex). - - Args: - human_prefix (str): Prefix for human input. Defaults to "Human". - ai_prefix (str): Prefix for AI output. Defaults to "AI". - memory_key (str): Key for memory. Defaults to "history". - index (BaseGPTIndex): LlamaIndex instance. - query_kwargs (Dict[str, Any]): Keyword arguments for LlamaIndex query. - input_key (Optional[str]): Input key. Defaults to None. 
- output_key (Optional[str]): Output key. Defaults to None. - - """ - - human_prefix: str = "Human" - ai_prefix: str = "AI" - memory_key: str = "history" - index: BaseGPTIndex - query_kwargs: Dict = Field(default_factory=dict) - output_key: Optional[str] = None - input_key: Optional[str] = None - - @property - def memory_variables(self) -> List[str]: - """Return memory variables.""" - return [self.memory_key] - - def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str: - if self.input_key is None: - prompt_input_key = get_prompt_input_key(inputs, self.memory_variables) - else: - prompt_input_key = self.input_key - return prompt_input_key - - def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]: - """Return key-value pairs given the text input to the chain.""" - prompt_input_key = self._get_prompt_input_key(inputs) - query_str = inputs[prompt_input_key] - - # TODO: wrap in prompt - # TODO: add option to return the raw text - # NOTE: currently it's a hack - response = self.index.query(query_str, **self.query_kwargs) - return {self.memory_key: str(response)} - - def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: - """Save the context of this model run to memory.""" - prompt_input_key = self._get_prompt_input_key(inputs) - if self.output_key is None: - if len(outputs) != 1: - raise ValueError(f"One output key expected, got {outputs.keys()}") - output_key = list(outputs.keys())[0] - else: - output_key = self.output_key - human = f"{self.human_prefix}: " + inputs[prompt_input_key] - ai = f"{self.ai_prefix}: " + outputs[output_key] - doc_text = "\n".join([human, ai]) - doc = Document(text=doc_text) - self.index.insert(doc) - - def clear(self) -> None: - """Clear memory contents.""" - pass - - def __repr__(self) -> str: - """Return representation.""" - return "GPTIndexMemory()" - - -class GPTIndexChatMemory(BaseChatMemory): - """Langchain chat memory wrapper (for LlamaIndex). - - Args: - human_prefix (str): Prefix for human input. Defaults to "Human". - ai_prefix (str): Prefix for AI output. Defaults to "AI". - memory_key (str): Key for memory. Defaults to "history". - index (BaseGPTIndex): LlamaIndex instance. - query_kwargs (Dict[str, Any]): Keyword arguments for LlamaIndex query. - input_key (Optional[str]): Input key. Defaults to None. - output_key (Optional[str]): Output key. Defaults to None. 
- - """ - - human_prefix: str = "Human" - ai_prefix: str = "AI" - memory_key: str = "history" - index: BaseGPTIndex - query_kwargs: Dict = Field(default_factory=dict) - output_key: Optional[str] = None - input_key: Optional[str] = None - - return_source: bool = False - id_to_message: Dict[str, BaseMessage] = Field(default_factory=dict) - - @property - def memory_variables(self) -> List[str]: - """Return memory variables.""" - return [self.memory_key] - - def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str: - if self.input_key is None: - prompt_input_key = get_prompt_input_key(inputs, self.memory_variables) - else: - prompt_input_key = self.input_key - return prompt_input_key - - def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]: - """Return key-value pairs given the text input to the chain.""" - prompt_input_key = self._get_prompt_input_key(inputs) - query_str = inputs[prompt_input_key] - - response_obj = self.index.query(query_str, **self.query_kwargs) - if self.return_source: - source_nodes = response_obj.source_nodes - if self.return_messages: - # get source messages from ids - source_ids = [sn.doc_id for sn in source_nodes] - source_messages = [ - m for id, m in self.id_to_message.items() if id in source_ids - ] - # NOTE: type List[BaseMessage] - response: Any = source_messages - else: - source_texts = [sn.source_text for sn in source_nodes] - response = "\n\n".join(source_texts) - else: - response = str(response_obj) - return {self.memory_key: response} - - def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: - """Save the context of this model run to memory.""" - prompt_input_key = self._get_prompt_input_key(inputs) - if self.output_key is None: - if len(outputs) != 1: - raise ValueError(f"One output key expected, got {outputs.keys()}") - output_key = list(outputs.keys())[0] - else: - output_key = self.output_key - - # a bit different than existing langchain implementation - # because we want to track id's for messages - human_message = HumanMessage(content=inputs[prompt_input_key]) - human_message_id = get_new_id(set(self.id_to_message.keys())) - ai_message = AIMessage(content=outputs[output_key]) - ai_message_id = get_new_id( - set(self.id_to_message.keys()).union({human_message_id}) - ) - - self.chat_memory.messages.append(human_message) - self.chat_memory.messages.append(ai_message) - - self.id_to_message[human_message_id] = human_message - self.id_to_message[ai_message_id] = ai_message - - human_txt = f"{self.human_prefix}: " + inputs[prompt_input_key] - ai_txt = f"{self.ai_prefix}: " + outputs[output_key] - human_doc = Document(text=human_txt, doc_id=human_message_id) - ai_doc = Document(text=ai_txt, doc_id=ai_message_id) - self.index.insert(human_doc) - self.index.insert(ai_doc) - - def clear(self) -> None: - """Clear memory contents.""" - pass - - def __repr__(self) -> str: - """Return representation.""" - return "GPTIndexMemory()" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/utils.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/utils.py deleted file mode 100644 index 2842423eabc9ec2ccfb0926bf9b3ee336876bd00..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/utils.py +++ /dev/null @@ -1,61 +0,0 @@ -"""Weaviate utils.""" - -from typing import Any, Dict, List, Set, cast - -from gpt_index.utils import 
get_new_int_id - -DEFAULT_CLASS_PREFIX_STUB = "Gpt_Index" - - -def get_default_class_prefix(current_id_set: Set = set()) -> str: - """Get default class prefix.""" - return DEFAULT_CLASS_PREFIX_STUB + "_" + str(get_new_int_id(current_id_set)) - - -def validate_client(client: Any) -> None: - """Validate client and import weaviate library.""" - try: - import weaviate # noqa: F401 - from weaviate import Client - - client = cast(Client, client) - except ImportError: - raise ImportError( - "Weaviate is not installed. " - "Please install it with `pip install weaviate-client`." - ) - cast(Client, client) - - -def parse_get_response(response: Dict) -> Dict: - """Parse get response from Weaviate.""" - if "errors" in response: - raise ValueError("Invalid query, got errors: {}".format(response["errors"])) - data_response = response["data"] - if "Get" not in data_response: - raise ValueError("Invalid query response, must be a Get query.") - - return data_response["Get"] - - -def get_by_id( - client: Any, object_id: str, class_name: str, properties: List[str] -) -> Dict: - """Get response by id from Weaviate.""" - validate_client(client) - - where_filter = {"path": ["id"], "operator": "Equal", "valueString": object_id} - query_result = ( - client.query.get(class_name, properties) - .with_where(where_filter) - .with_additional(["id", "vector"]) - .do() - ) - - parsed_result = parse_get_response(query_result) - entries = parsed_result[class_name] - if len(entries) == 0: - raise ValueError("No entry found for the given id") - elif len(entries) > 1: - raise ValueError("More than one entry found for the given id") - return entries[0] diff --git a/spaces/johnnygreco/the-gpt-who-lived/README.md b/spaces/johnnygreco/the-gpt-who-lived/README.md deleted file mode 100644 index 06ed49c2913dce26c485b72ffd003b09aedab643..0000000000000000000000000000000000000000 --- a/spaces/johnnygreco/the-gpt-who-lived/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🧙‍♂️ The GPT Who Lived 🤖 -emoji: 🪄 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/johnslegers/stable-diffusion/app_basic.py b/spaces/johnslegers/stable-diffusion/app_basic.py deleted file mode 100644 index 838527b0c0d713ff509c7bb9ee7f8f37d70ff723..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion/app_basic.py +++ /dev/null @@ -1,70 +0,0 @@ -import gradio as gr -import torch -import os -from diffusers import StableDiffusionPipeline - -def dummy(images, **kwargs): - return images, False - -model_id = "CompVis/stable-diffusion-v1-4" - -AUTH_TOKEN = os.environ.get('AUTH_TOKEN') -if not AUTH_TOKEN: - with open('/root/.huggingface/token') as f: - lines = f.readlines() - AUTH_TOKEN = lines[0] - -device = "cuda" if torch.cuda.is_available() else "cpu" -if device == "cuda": - print('Nvidia GPU detected!') - share = True - pipe = StableDiffusionPipeline.from_pretrained( - model_id, - use_auth_token=AUTH_TOKEN, - revision="fp16", - torch_dtype=torch.float16 - ) -else: - print('No Nvidia GPU in system!') - share = False - pipe = StableDiffusionPipeline.from_pretrained( - model_id, - use_auth_token=AUTH_TOKEN - ) - -pipe.to(device) -pipe.safety_checker = dummy - -def infer(prompt="", samples=4, steps=20, scale=7.5, seed=1437181781): - generator = torch.Generator(device=device).manual_seed(seed) - images = [] - images_list = pipe( - [prompt] * samples, - 
num_inference_steps=steps, - guidance_scale=scale, - generator=generator, - ) - - for i, image in enumerate(images_list["sample"]): - images.append(image) - - return images - -gr.Interface( - fn=infer, - inputs=[ - gr.Textbox(lines=2, placeholder="Prompt here..."), - gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1), - gr.Slider(label="Steps", minimum=1, maximum=50, value=20, step=1), - gr.Slider(label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1), - gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True) - ], - outputs=gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery" - ).style(grid=[2], height="auto"), -).launch( - share=share, - enable_queue=True -) diff --git a/spaces/joshuasundance/langchain-streamlit-demo/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/joshuasundance/langchain-streamlit-demo/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index 628665035450c13871c7dc408eb8d7bf665a8796..0000000000000000000000000000000000000000 --- a/spaces/joshuasundance/langchain-streamlit-demo/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: enhancement -assignees: '' - ---- - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** -Add any other context or screenshots about the feature request here. diff --git a/spaces/jpfearnworks/ai_agents/modules/settings/component.py b/spaces/jpfearnworks/ai_agents/modules/settings/component.py deleted file mode 100644 index bc0f9fc44927f4ff3c67c8aee0c5947b14df5de7..0000000000000000000000000000000000000000 --- a/spaces/jpfearnworks/ai_agents/modules/settings/component.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr -from modules.settings.user_settings import UserSettings - -def set_api_key(key: str): - UserSettings.get_instance().set_api_key(key) - return "API key set" - - -def create_settings_ui(): - settings = UserSettings.get_instance() - api_key_default = settings.get_api_key() - api_key = gr.Textbox(label="You OpenAI API key", type="password") - set_status = gr.Text() - key_button = gr.Button(label="Set Key") - key_button.click(set_api_key, outputs=[set_status], inputs=[api_key]) \ No newline at end of file diff --git a/spaces/juliensimon/battle_of_image_classifiers/README.md b/spaces/juliensimon/battle_of_image_classifiers/README.md deleted file mode 100644 index 4e03e7cae852f213ced4b0786c7c33facfaf513a..0000000000000000000000000000000000000000 --- a/spaces/juliensimon/battle_of_image_classifiers/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Battle Of The Image Classifiers -emoji: 🐨 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kamidara/lolipaoi02/Dockerfile b/spaces/kamidara/lolipaoi02/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/kamidara/lolipaoi02/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app 
- -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/kdrkdrkdr/HutaoTTS/text/symbols.py b/spaces/kdrkdrkdr/HutaoTTS/text/symbols.py deleted file mode 100644 index 8648bd1e2ac0cfe99e0eaab6540c56baf668fe14..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HutaoTTS/text/symbols.py +++ /dev/null @@ -1,74 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -'''# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' -''' - -'''# sanskrit_cleaners -_pad = '_' -_punctuation = '।' -_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ ' -''' - -'''# cjks_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ ' -''' - -'''# thai_cleaners -_pad = '_' -_punctuation = '.!? ' -_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์' -''' - -'''# cjke_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ ' -''' - -'''# shanghainese_cleaners -_pad = '_' -_punctuation = ',.!?…' -_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 ' -''' - -'''# chinese_dialect_cleaners -_pad = '_' -_punctuation = ',.!?~…─' -_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚αᴀᴇ↑↓∅ⱼ ' -''' - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/keras-io/Flowers-Classification-MobileViT/app.py b/spaces/keras-io/Flowers-Classification-MobileViT/app.py deleted file mode 100644 index 454f88d45eb107eb994fd1467612c28c6484b531..0000000000000000000000000000000000000000 --- a/spaces/keras-io/Flowers-Classification-MobileViT/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -import tensorflow as tf -from huggingface_hub import from_pretrained_keras -import numpy as np - -model = from_pretrained_keras("keras-io/mobile-vit-xxs") - -classes=['dandelion','daisy','tulip','sunflower','rose'] -image_size = 256 - -def classify_images(image): - image = tf.convert_to_tensor(image) - image = tf.image.resize(image, (image_size, image_size)) - image = tf.expand_dims(image,axis=0) - prediction = model.predict(image) - prediction = tf.squeeze(tf.round(prediction)) - text_output = str(f'{classes[(np.argmax(prediction))]}!') - return text_output - -i = gr.inputs.Image() -o = gr.outputs.Textbox() - -examples = [["./examples/tulip.png"], ["./examples/daisy.jpeg"], ["./examples/dandelion.jpeg"], ["./examples/rose.png"], ["./examples/sunflower.png"]] -title = "Flowers Classification MobileViT" -description = "Upload an image or select from examples to classify flowers" - -article = "" -gr.Interface(classify_images, i, o, allow_flagging=False, analytics_enabled=False, - title=title, examples=examples, description=description, 
article=article).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/hubert/__init__.py b/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/data/image_folder.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/data/image_folder.py deleted file mode 100644 index efadc2ecbe2fb4b53b78230aba25ec505eff0e55..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/data/image_folder.py +++ /dev/null @@ -1,66 +0,0 @@ -"""A modified image folder class - -We modify the official PyTorch image folder (https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py) -so that this class can load images from both current directory and its subdirectories. -""" -import numpy as np -import torch.utils.data as data - -from PIL import Image -import os -import os.path - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', - '.tif', '.TIF', '.tiff', '.TIFF', -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(dir, max_dataset_size=float("inf")): - images = [] - assert os.path.isdir(dir) or os.path.islink(dir), '%s is not a valid directory' % dir - - for root, _, fnames in sorted(os.walk(dir, followlinks=True)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - images.append(path) - return images[:min(max_dataset_size, len(images))] - - -def default_loader(path): - return Image.open(path).convert('RGB') - - -class ImageFolder(data.Dataset): - - def __init__(self, root, transform=None, return_paths=False, - loader=default_loader): - imgs = make_dataset(root) - if len(imgs) == 0: - raise(RuntimeError("Found 0 images in: " + root + "\n" - "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) - - self.root = root - self.imgs = imgs - self.transform = transform - self.return_paths = return_paths - self.loader = loader - - def __getitem__(self, index): - path = self.imgs[index] - img = self.loader(path) - if self.transform is not None: - img = self.transform(img) - if self.return_paths: - return img, path - else: - return img - - def __len__(self): - return len(self.imgs) diff --git a/spaces/kevinwang676/ControlNet-with-GPT-4/model.py b/spaces/kevinwang676/ControlNet-with-GPT-4/model.py deleted file mode 100644 index 5232a46c7bdb1cab775025b495c46381560c2ebe..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ControlNet-with-GPT-4/model.py +++ /dev/null @@ -1,674 +0,0 @@ -from __future__ import annotations - -import gc - -import numpy as np -import PIL.Image -import torch -from controlnet_aux.util import HWC3 -from diffusers import ( - ControlNetModel, - DiffusionPipeline, - StableDiffusionControlNetPipeline, - UniPCMultistepScheduler, -) - -from cv_utils import resize_image -from preprocessor import Preprocessor -from settings import MAX_IMAGE_RESOLUTION, MAX_NUM_IMAGES - -CONTROLNET_MODEL_IDS = { - "Openpose": "lllyasviel/control_v11p_sd15_openpose", - "Canny": "lllyasviel/control_v11p_sd15_canny", - "MLSD": "lllyasviel/control_v11p_sd15_mlsd", - "scribble": "lllyasviel/control_v11p_sd15_scribble", - "softedge": "lllyasviel/control_v11p_sd15_softedge", - "segmentation": "lllyasviel/control_v11p_sd15_seg", - 
"depth": "lllyasviel/control_v11f1p_sd15_depth", - "NormalBae": "lllyasviel/control_v11p_sd15_normalbae", - "lineart": "lllyasviel/control_v11p_sd15_lineart", - "lineart_anime": "lllyasviel/control_v11p_sd15s2_lineart_anime", - "shuffle": "lllyasviel/control_v11e_sd15_shuffle", - "ip2p": "lllyasviel/control_v11e_sd15_ip2p", - "inpaint": "lllyasviel/control_v11e_sd15_inpaint", -} - - -def download_all_controlnet_weights() -> None: - for model_id in CONTROLNET_MODEL_IDS.values(): - ControlNetModel.from_pretrained(model_id) - - -class Model: - def __init__(self, base_model_id: str = "runwayml/stable-diffusion-v1-5", task_name: str = "Canny"): - self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - self.base_model_id = "" - self.task_name = "" - self.pipe = self.load_pipe(base_model_id, task_name) - self.preprocessor = Preprocessor() - - def load_pipe(self, base_model_id: str, task_name) -> DiffusionPipeline: - if ( - base_model_id == self.base_model_id - and task_name == self.task_name - and hasattr(self, "pipe") - and self.pipe is not None - ): - return self.pipe - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, torch_dtype=torch.float16) - pipe = StableDiffusionControlNetPipeline.from_pretrained( - base_model_id, safety_checker=None, controlnet=controlnet, torch_dtype=torch.float16 - ) - pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - if self.device.type == "cuda": - pipe.enable_xformers_memory_efficient_attention() - pipe.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.base_model_id = base_model_id - self.task_name = task_name - return pipe - - def set_base_model(self, base_model_id: str) -> str: - if not base_model_id or base_model_id == self.base_model_id: - return self.base_model_id - del self.pipe - torch.cuda.empty_cache() - gc.collect() - try: - self.pipe = self.load_pipe(base_model_id, self.task_name) - except Exception: - self.pipe = self.load_pipe(self.base_model_id, self.task_name) - return self.base_model_id - - def load_controlnet_weight(self, task_name: str) -> None: - if task_name == self.task_name: - return - if self.pipe is not None and hasattr(self.pipe, "controlnet"): - del self.pipe.controlnet - torch.cuda.empty_cache() - gc.collect() - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, torch_dtype=torch.float16) - controlnet.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.pipe.controlnet = controlnet - self.task_name = task_name - - def get_prompt(self, prompt: str, additional_prompt: str) -> str: - if not prompt: - prompt = additional_prompt - else: - prompt = f"{prompt}, {additional_prompt}" - return prompt - - @torch.autocast("cuda") - def run_pipe( - self, - prompt: str, - negative_prompt: str, - control_image: PIL.Image.Image, - num_images: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - generator = torch.Generator().manual_seed(seed) - return self.pipe( - prompt=prompt, - negative_prompt=negative_prompt, - guidance_scale=guidance_scale, - num_images_per_prompt=num_images, - num_inference_steps=num_steps, - generator=generator, - image=control_image, - ).images - - @torch.inference_mode() - def process_canny( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - low_threshold: int, - 
high_threshold: int, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - self.preprocessor.load("Canny") - control_image = self.preprocessor( - image=image, low_threshold=low_threshold, high_threshold=high_threshold, detect_resolution=image_resolution - ) - - self.load_controlnet_weight("Canny") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_mlsd( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - value_threshold: float, - distance_threshold: float, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - self.preprocessor.load("MLSD") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - thr_v=value_threshold, - thr_d=distance_threshold, - ) - self.load_controlnet_weight("MLSD") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_scribble( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name == "HED": - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=False, - ) - elif preprocessor_name == "PidiNet": - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - safe=False, - ) - self.load_controlnet_weight("scribble") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_scribble_interactive( - self, - image_and_mask: dict[str, np.ndarray], - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - if image_and_mask is None: - raise 
ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - image = image_and_mask["mask"] - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - - self.load_controlnet_weight("scribble") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_softedge( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ["HED", "HED safe"]: - safe = "safe" in preprocessor_name - self.preprocessor.load("HED") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=safe, - ) - elif preprocessor_name in ["PidiNet", "PidiNet safe"]: - safe = "safe" in preprocessor_name - self.preprocessor.load("PidiNet") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - safe=safe, - ) - else: - raise ValueError - self.load_controlnet_weight("softedge") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_openpose( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load("Openpose") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - hand_and_face=True, - ) - self.load_controlnet_weight("Openpose") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_segmentation( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - 
preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight("segmentation") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_depth( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight("depth") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_normal( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load("NormalBae") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight("NormalBae") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_lineart( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, 
- preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name in ["None", "None (anime)"]: - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ["Lineart", "Lineart coarse"]: - coarse = "coarse" in preprocessor_name - self.preprocessor.load("Lineart") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - coarse=coarse, - ) - elif preprocessor_name == "Lineart (anime)": - self.preprocessor.load("LineartAnime") - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - if "anime" in preprocessor_name: - self.load_controlnet_weight("lineart_anime") - else: - self.load_controlnet_weight("lineart") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_shuffle( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - if preprocessor_name == "None": - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - ) - self.load_controlnet_weight("shuffle") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_ip2p( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - if image is None: - raise ValueError - if image_resolution > MAX_IMAGE_RESOLUTION: - raise ValueError - if num_images > MAX_NUM_IMAGES: - raise ValueError - - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - self.load_controlnet_weight("ip2p") - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results diff --git a/spaces/kevinwang676/SadTalker/README.md b/spaces/kevinwang676/SadTalker/README.md deleted file mode 100644 index 68fbc0409288af25fa253d5997b69c55a43b28b5..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/README.md +++ /dev/null @@ -1,14 
+0,0 @@ ---- -title: SadTalker -emoji: 🌊 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: mit ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChanger/src/test_audio2coeff.py b/spaces/kevinwang676/VoiceChanger/src/test_audio2coeff.py deleted file mode 100644 index bbf19f494e2127b4ae9d6074b172fddb694d6e34..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/test_audio2coeff.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import torch -import numpy as np -from scipy.io import savemat, loadmat -from yacs.config import CfgNode as CN -from scipy.signal import savgol_filter - -import safetensors -import safetensors.torch - -from src.audio2pose_models.audio2pose import Audio2Pose -from src.audio2exp_models.networks import SimpleWrapperV2 -from src.audio2exp_models.audio2exp import Audio2Exp -from src.utils.safetensor_helper import load_x_from_safetensor - -def load_cpk(checkpoint_path, model=None, optimizer=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if model is not None: - model.load_state_dict(checkpoint['model']) - if optimizer is not None: - optimizer.load_state_dict(checkpoint['optimizer']) - - return checkpoint['epoch'] - -class Audio2Coeff(): - - def __init__(self, sadtalker_path, device): - #load config - fcfg_pose = open(sadtalker_path['audio2pose_yaml_path']) - cfg_pose = CN.load_cfg(fcfg_pose) - cfg_pose.freeze() - fcfg_exp = open(sadtalker_path['audio2exp_yaml_path']) - cfg_exp = CN.load_cfg(fcfg_exp) - cfg_exp.freeze() - - # load audio2pose_model - self.audio2pose_model = Audio2Pose(cfg_pose, None, device=device) - self.audio2pose_model = self.audio2pose_model.to(device) - self.audio2pose_model.eval() - for param in self.audio2pose_model.parameters(): - param.requires_grad = False - - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - self.audio2pose_model.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2pose')) - else: - load_cpk(sadtalker_path['audio2pose_checkpoint'], model=self.audio2pose_model, device=device) - except: - raise Exception("Failed in loading audio2pose_checkpoint") - - # load audio2exp_model - netG = SimpleWrapperV2() - netG = netG.to(device) - for param in netG.parameters(): - netG.requires_grad = False - netG.eval() - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - netG.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2exp')) - else: - load_cpk(sadtalker_path['audio2exp_checkpoint'], model=netG, device=device) - except: - raise Exception("Failed in loading audio2exp_checkpoint") - self.audio2exp_model = Audio2Exp(netG, cfg_exp, device=device, prepare_training_loss=False) - self.audio2exp_model = self.audio2exp_model.to(device) - for param in self.audio2exp_model.parameters(): - param.requires_grad = False - self.audio2exp_model.eval() - - self.device = device - - def generate(self, batch, coeff_save_dir, pose_style, ref_pose_coeff_path=None): - - with torch.no_grad(): - #test - results_dict_exp= self.audio2exp_model.test(batch) - exp_pred = results_dict_exp['exp_coeff_pred'] #bs T 64 - - #for class_id in range(1): - #class_id = 0#(i+10)%45 - #class_id = random.randint(0,46) #46 styles can be selected - batch['class'] = 
torch.LongTensor([pose_style]).to(self.device) - results_dict_pose = self.audio2pose_model.test(batch) - pose_pred = results_dict_pose['pose_pred'] #bs T 6 - - pose_len = pose_pred.shape[1] - if pose_len<13: - pose_len = int((pose_len-1)/2)*2+1 - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), pose_len, 2, axis=1)).to(self.device) - else: - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), 13, 2, axis=1)).to(self.device) - - coeffs_pred = torch.cat((exp_pred, pose_pred), dim=-1) #bs T 70 - - coeffs_pred_numpy = coeffs_pred[0].clone().detach().cpu().numpy() - - if ref_pose_coeff_path is not None: - coeffs_pred_numpy = self.using_refpose(coeffs_pred_numpy, ref_pose_coeff_path) - - savemat(os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])), - {'coeff_3dmm': coeffs_pred_numpy}) - - return os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])) - - def using_refpose(self, coeffs_pred_numpy, ref_pose_coeff_path): - num_frames = coeffs_pred_numpy.shape[0] - refpose_coeff_dict = loadmat(ref_pose_coeff_path) - refpose_coeff = refpose_coeff_dict['coeff_3dmm'][:,64:70] - refpose_num_frames = refpose_coeff.shape[0] - if refpose_num_frames self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in 
unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = 
self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - 
def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - 
upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - 
self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) 
- fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/removeOverlaps.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/removeOverlaps.py deleted file mode 100644 index 624cd47b4076a95cbc7c2124550371f6ffa5ea37..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/removeOverlaps.py +++ /dev/null @@ -1,248 +0,0 @@ -""" Simplify TrueType glyphs by merging overlapping contours/components. 
- -Requires https://github.com/fonttools/skia-pathops -""" - -import itertools -import logging -from typing import Callable, Iterable, Optional, Mapping - -from fontTools.misc.roundTools import otRound -from fontTools.ttLib import ttFont -from fontTools.ttLib.tables import _g_l_y_f -from fontTools.ttLib.tables import _h_m_t_x -from fontTools.pens.ttGlyphPen import TTGlyphPen - -import pathops - - -__all__ = ["removeOverlaps"] - - -class RemoveOverlapsError(Exception): - pass - - -log = logging.getLogger("fontTools.ttLib.removeOverlaps") - -_TTGlyphMapping = Mapping[str, ttFont._TTGlyph] - - -def skPathFromGlyph(glyphName: str, glyphSet: _TTGlyphMapping) -> pathops.Path: - path = pathops.Path() - pathPen = path.getPen(glyphSet=glyphSet) - glyphSet[glyphName].draw(pathPen) - return path - - -def skPathFromGlyphComponent( - component: _g_l_y_f.GlyphComponent, glyphSet: _TTGlyphMapping -): - baseGlyphName, transformation = component.getComponentInfo() - path = skPathFromGlyph(baseGlyphName, glyphSet) - return path.transform(*transformation) - - -def componentsOverlap(glyph: _g_l_y_f.Glyph, glyphSet: _TTGlyphMapping) -> bool: - if not glyph.isComposite(): - raise ValueError("This method only works with TrueType composite glyphs") - if len(glyph.components) < 2: - return False # single component, no overlaps - - component_paths = {} - - def _get_nth_component_path(index: int) -> pathops.Path: - if index not in component_paths: - component_paths[index] = skPathFromGlyphComponent( - glyph.components[index], glyphSet - ) - return component_paths[index] - - return any( - pathops.op( - _get_nth_component_path(i), - _get_nth_component_path(j), - pathops.PathOp.INTERSECTION, - fix_winding=False, - keep_starting_points=False, - ) - for i, j in itertools.combinations(range(len(glyph.components)), 2) - ) - - -def ttfGlyphFromSkPath(path: pathops.Path) -> _g_l_y_f.Glyph: - # Skia paths have no 'components', no need for glyphSet - ttPen = TTGlyphPen(glyphSet=None) - path.draw(ttPen) - glyph = ttPen.glyph() - assert not glyph.isComposite() - # compute glyph.xMin (glyfTable parameter unused for non composites) - glyph.recalcBounds(glyfTable=None) - return glyph - - -def _round_path( - path: pathops.Path, round: Callable[[float], float] = otRound -) -> pathops.Path: - rounded_path = pathops.Path() - for verb, points in path: - rounded_path.add(verb, *((round(p[0]), round(p[1])) for p in points)) - return rounded_path - - -def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path: - # skia-pathops has a bug where it sometimes fails to simplify paths when there - # are float coordinates and control points are very close to one another. - # Rounding coordinates to integers works around the bug. - # Since we are going to round glyf coordinates later on anyway, here it is - # ok(-ish) to also round before simplify. Better than failing the whole process - # for the entire font. 
- # https://bugs.chromium.org/p/skia/issues/detail?id=11958 - # https://github.com/google/fonts/issues/3365 - # TODO(anthrotype): remove once this Skia bug is fixed - try: - return pathops.simplify(path, clockwise=path.clockwise) - except pathops.PathOpsError: - pass - - path = _round_path(path) - try: - path = pathops.simplify(path, clockwise=path.clockwise) - log.debug( - "skia-pathops failed to simplify '%s' with float coordinates, " - "but succeded using rounded integer coordinates", - debugGlyphName, - ) - return path - except pathops.PathOpsError as e: - if log.isEnabledFor(logging.DEBUG): - path.dump() - raise RemoveOverlapsError( - f"Failed to remove overlaps from glyph {debugGlyphName!r}" - ) from e - - raise AssertionError("Unreachable") - - -def removeTTGlyphOverlaps( - glyphName: str, - glyphSet: _TTGlyphMapping, - glyfTable: _g_l_y_f.table__g_l_y_f, - hmtxTable: _h_m_t_x.table__h_m_t_x, - removeHinting: bool = True, -) -> bool: - glyph = glyfTable[glyphName] - # decompose composite glyphs only if components overlap each other - if ( - glyph.numberOfContours > 0 - or glyph.isComposite() - and componentsOverlap(glyph, glyphSet) - ): - path = skPathFromGlyph(glyphName, glyphSet) - - # remove overlaps - path2 = _simplify(path, glyphName) - - # replace TTGlyph if simplified path is different (ignoring contour order) - if {tuple(c) for c in path.contours} != {tuple(c) for c in path2.contours}: - glyfTable[glyphName] = glyph = ttfGlyphFromSkPath(path2) - # simplified glyph is always unhinted - assert not glyph.program - # also ensure hmtx LSB == glyph.xMin so glyph origin is at x=0 - width, lsb = hmtxTable[glyphName] - if lsb != glyph.xMin: - hmtxTable[glyphName] = (width, glyph.xMin) - return True - - if removeHinting: - glyph.removeHinting() - return False - - -def removeOverlaps( - font: ttFont.TTFont, - glyphNames: Optional[Iterable[str]] = None, - removeHinting: bool = True, - ignoreErrors=False, -) -> None: - """Simplify glyphs in TTFont by merging overlapping contours. - - Overlapping components are first decomposed to simple contours, then merged. - - Currently this only works with TrueType fonts with 'glyf' table. - Raises NotImplementedError if 'glyf' table is absent. - - Note that removing overlaps invalidates the hinting. By default we drop hinting - from all glyphs whether or not overlaps are removed from a given one, as it would - look weird if only some glyphs are left (un)hinted. - - Args: - font: input TTFont object, modified in place. - glyphNames: optional iterable of glyph names (str) to remove overlaps from. - By default, all glyphs in the font are processed. - removeHinting (bool): set to False to keep hinting for unmodified glyphs. - ignoreErrors (bool): set to True to ignore errors while removing overlaps, - thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363). 
- """ - try: - glyfTable = font["glyf"] - except KeyError: - raise NotImplementedError("removeOverlaps currently only works with TTFs") - - hmtxTable = font["hmtx"] - # wraps the underlying glyf Glyphs, takes care of interfacing with drawing pens - glyphSet = font.getGlyphSet() - - if glyphNames is None: - glyphNames = font.getGlyphOrder() - - # process all simple glyphs first, then composites with increasing component depth, - # so that by the time we test for component intersections the respective base glyphs - # have already been simplified - glyphNames = sorted( - glyphNames, - key=lambda name: ( - glyfTable[name].getCompositeMaxpValues(glyfTable).maxComponentDepth - if glyfTable[name].isComposite() - else 0, - name, - ), - ) - modified = set() - for glyphName in glyphNames: - try: - if removeTTGlyphOverlaps( - glyphName, glyphSet, glyfTable, hmtxTable, removeHinting - ): - modified.add(glyphName) - except RemoveOverlapsError: - if not ignoreErrors: - raise - log.error("Failed to remove overlaps for '%s'", glyphName) - - log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified)) - - -def main(args=None): - import sys - - if args is None: - args = sys.argv[1:] - - if len(args) < 2: - print( - f"usage: fonttools ttLib.removeOverlaps INPUT.ttf OUTPUT.ttf [GLYPHS ...]" - ) - sys.exit(1) - - src = args[0] - dst = args[1] - glyphNames = args[2:] or None - - with ttFont.TTFont(src) as f: - removeOverlaps(f, glyphNames) - f.save(dst) - - -if __name__ == "__main__": - main() diff --git a/spaces/lavita/medical-question-answering-datasets/app.py b/spaces/lavita/medical-question-answering-datasets/app.py deleted file mode 100644 index 9ac025657dd7d4a8dda3e07ee9f907549cdd8c98..0000000000000000000000000000000000000000 --- a/spaces/lavita/medical-question-answering-datasets/app.py +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import gradio as gr - -from dataset_list import DatasetList - -DESCRIPTION = '# Explore Medical Question Answering Datasets 🏥' -NOTES = ''' -''' -FOOTER = '''''' - -def main(): - dataset_list = DatasetList() - - with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - - search_box = gr.Textbox( - label='Search Dataset Name', - placeholder= - 'You can search for titles with regular expressions. e.g. (? torch.device: - return torch.device(0) - - def __call__(self, *args, **kwargs): - use_cache = kwargs.get('use_cache', True) - labels = kwargs.get('labels', None) - past_key_values = kwargs.get('past_key_values', None) - - if len(args) > 0: - if not shared.args.cfg_cache: - logger.error("Please enable the cfg-cache option to use CFG with llamacpp_HF.") - return - - input_ids = args[0] - is_negative = True - past_seq = self.past_seq_negative - self.load_negative_cache() - else: - input_ids = kwargs['input_ids'] - is_negative = False - past_seq = self.past_seq - self.load_cache() - - seq = input_ids[0].tolist() - if is_negative and past_key_values is not None: - seq = past_key_values + seq - - seq_tensor = torch.tensor(seq) - reset = True - - # Make the forward call. 
The prefix-match code has been adapted from - # https://github.com/abetlen/llama-cpp-python/commit/f4090a0bb2a2a25acfe28d31c82cc1aa273bedee - if labels is None: - if past_seq is not None: - min_length = min(past_seq.shape[0], seq_tensor.shape[0]) - indices = torch.nonzero(~torch.eq(past_seq[:min_length], seq_tensor[:min_length])) - if len(indices) > 0: - longest_prefix = indices[0].item() - else: - longest_prefix = min_length - - if longest_prefix > 0: - reset = False - self.model.n_tokens = longest_prefix - if len(seq_tensor) - longest_prefix > 0: - self.model.eval(seq[longest_prefix:]) - - if reset: - self.model.reset() - self.model.eval(seq) - - logits = torch.tensor(self.model.scores[self.model.n_tokens - 1, :]).view(1, 1, -1).to(input_ids.device) - else: - self.model.reset() - self.model.eval(seq) - logits = torch.tensor(self.model.eval_logits) - logits = logits.view(1, logits.shape[0], logits.shape[1]).to(input_ids.device) - - if is_negative: - self.save_negative_cache() - self.past_seq_negative = seq_tensor - else: - self.save_cache() - self.past_seq = seq_tensor - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - shift_logits = shift_logits.view(-1, logits.shape[-1]) - shift_labels = shift_labels.view(-1) - # Enable model parallelism - shift_labels = shift_labels.to(shift_logits.device) - loss = loss_fct(shift_logits, shift_labels) - - return CausalLMOutputWithPast(logits=logits, past_key_values=seq if use_cache else None, loss=loss) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs): - assert len(model_args) == 0 and len(kwargs) == 0, "extra args is currently not supported" - - if isinstance(pretrained_model_name_or_path, str): - pretrained_model_name_or_path = Path(pretrained_model_name_or_path) - - path = Path(f'{shared.args.model_dir}') / Path(pretrained_model_name_or_path) - if path.is_file(): - model_file = path - else: - model_file = list(path.glob('*.gguf'))[0] - - logger.info(f"llama.cpp weights detected: {model_file}\n") - - if shared.args.tensor_split is None or shared.args.tensor_split.strip() == '': - tensor_split_list = None - else: - tensor_split_list = [float(x) for x in shared.args.tensor_split.strip().split(",")] - - params = { - 'model_path': str(model_file), - 'n_ctx': shared.args.n_ctx, - 'seed': int(shared.args.llama_cpp_seed), - 'n_threads': shared.args.threads or None, - 'n_threads_batch': shared.args.threads_batch or None, - 'n_batch': shared.args.n_batch, - 'use_mmap': not shared.args.no_mmap, - 'use_mlock': shared.args.mlock, - 'mul_mat_q': not shared.args.no_mul_mat_q, - 'numa': shared.args.numa, - 'n_gpu_layers': shared.args.n_gpu_layers, - 'rope_freq_base': RoPE.get_rope_freq_base(shared.args.alpha_value, shared.args.rope_freq_base), - 'tensor_split': tensor_split_list, - 'rope_freq_scale': 1.0 / shared.args.compress_pos_emb, - 'logits_all': True, - } - - Llama = llama_cpp_lib().Llama - model = Llama(**params) - - return LlamacppHF(model, model_file) diff --git a/spaces/lewiswu1209/MockingBird/synthesizer_preprocess_audio.py b/spaces/lewiswu1209/MockingBird/synthesizer_preprocess_audio.py deleted file mode 100644 index 51d92f91a485ea853957127bec9166420daed934..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer_preprocess_audio.py +++ /dev/null @@ -1,65 
+0,0 @@ -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - - -recognized_datasets = [ - "aidatatang_200zh", - "magicdata", - "aishell3" -] - -if __name__ == "__main__": - print("This method is deprecaded and will not be longer supported, please use 'pre.py'") - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your LibriSpeech/TTS datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. Defaults to /SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=None, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted.") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("--dataset", type=str, default="aidatatang_200zh", help=\ - "Name of the dataset to process, allowing values: magicdata, aidatatang_200zh.") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - assert args.dataset in recognized_datasets, 'is not supported, please vote for it in https://github.com/babysor/MockingBird/issues/10' - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again. If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - # Preprocess the dataset - print_args(args, parser) - args.hparams = hparams.parse(args.hparams) - - preprocess_dataset(**vars(args)) \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Atoll Forsk 3.2 Crack !FREE!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Atoll Forsk 3.2 Crack !FREE!.md deleted file mode 100644 index 4ddb812b53d0b790a827584958b1984be1832ee0..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Atoll Forsk 3.2 Crack !FREE!.md +++ /dev/null @@ -1,96 +0,0 @@ -
-

Atoll Forsk 3.2 Crack - A Powerful Software for Wireless Network Design and Optimization

- -

Wireless network design and optimization is a complex and challenging task that requires a lot of knowledge, skills, and tools. It involves creating, simulating, and evaluating different scenarios and solutions for various network technologies and standards, such as GSM, UMTS, LTE, NB-IoT, CDMA, Wi-Fi, and more. It also involves taking into account various factors and parameters, such as terrain elevation, clutter data, traffic demand, population density, propagation models, antenna characteristics, interference, coverage, capacity, quality of service, and more.

-

Atoll forsk 3.2 crack


Download Zip >>> https://bytlly.com/2uGwk9



- -

One of the most popular and powerful tools for wireless network design and optimization is Atoll Forsk. Atoll Forsk is a software that allows you to design and optimize wireless networks of various technologies and standards. It has many features and capabilities that can help you to create, simulate, and evaluate different scenarios and solutions for your network design and optimization needs.

- -

However, Atoll Forsk is not free software. It requires a license key to activate and use its full features and capabilities, and the license key is usually provided by the software developer or distributor when you purchase the software legally. Some people, however, may not be able or willing to pay for the software, and may look for alternative ways to get it for free.

- -

One of the common ways to get Atoll Forsk for free is to use a crack. A crack is a modified version of the software that bypasses or removes the license key verification process. By using a crack, you can install and run Atoll Forsk without entering a valid license key. However, using a crack is not legal or ethical, as it violates the terms of service of the software and infringes on the rights of its creators. Moreover, using a crack may expose you to various risks and problems, such as malware infection, data loss, system damage, or legal consequences.

- -

In this article, we will show you where to find Atoll Forsk 3.2 crack and how to download and use it safely and legally. We will also give you some tips on how to use Atoll Forsk 3.2 crack effectively and efficiently for your wireless network design and optimization projects.

-

- -

Where to Find Atoll Forsk 3.2 Crack

- -

There are many websites that offer cracks for various software, including Atoll Forsk 3.2 crack. However, not all of them are reliable or trustworthy. Some of them may contain malware, viruses, or fake files that can harm your computer or compromise your privacy. Therefore, you need to be careful when choosing a website to download Atoll Forsk 3.2 crack.

- -

One of the best places to find Atoll Forsk 3.2 crack is Wannacrack.com. This is a website that provides cracks for various software engineering specialized tools, such as Atoll Forsk 3.2 crack. You can find the link to download Atoll Forsk 3.2 crack here: https://wannacrack.com/software/engineering-specialized/forsk-atoll-3-3-2-10366-x86-x64?t=9kLuRKYQ.

- -

This website is safe and secure, as it does not contain any malware or viruses. It also provides detailed instructions on how to install and use Atoll Forsk 3.2 crack. However, we cannot guarantee that this website will always work or be available, as it may be taken down or blocked by the authorities at any time.

- -

Another option is to use Reddit, a popular online community where users can share and discuss various topics. There is a subreddit dedicated to Atoll Forsk 3.2 crack.

-

How to Use Atoll Forsk 3.2 Crack Effectively and Efficiently

- -

Atoll Forsk 3.2 crack is a software that allows you to design and optimize wireless networks of various technologies and standards. It has many features and capabilities that can help you to create, simulate, and evaluate different scenarios and solutions for your network design and optimization needs.

- -

To use Atoll Forsk 3.2 crack effectively and efficiently, you need to have some basic knowledge and skills in wireless network engineering and planning. You also need to have some digital maps of your target area that can provide accurate information about terrain elevation, clutter data, traffic demand, population density, and more.
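Before importing such data, it is worth a quick sanity check that the layers actually line up: same projection, sensible resolution, plausible value ranges. The short Python sketch below is one way to do that for a terrain raster; the file name is hypothetical and rasterio is used purely as an illustration here — Atoll Forsk ingests geodata through its own import formats and dialogs.

```python
# Quick sanity check of a digital terrain model before import.
# "terrain_dtm.tif" is a hypothetical file name; rasterio is used only for illustration.
import numpy as np
import rasterio

with rasterio.open("terrain_dtm.tif") as src:
    elevation = src.read(1).astype(float)            # first band: elevation in metres
    if src.nodata is not None:
        elevation[elevation == src.nodata] = np.nan  # mask no-data cells

    print("CRS:            ", src.crs)               # coordinate reference system
    print("Pixel size:     ", src.res)               # (x, y) resolution in map units
    print("Extent:         ", src.bounds)            # raster bounding box
    print("Elevation range:", np.nanmin(elevation), "to", np.nanmax(elevation), "m")
    print("Mean elevation: ", round(float(np.nanmean(elevation)), 1), "m")
```

Catching a wrong coordinate system, or a no-data value being treated as real elevation, at this stage is much cheaper than discovering it after a full round of predictions.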

- -

Here are some tips on how to use Atoll Forsk 3.2 crack effectively and efficiently for your wireless network design and optimization projects:

- -
    -
  • Start by defining your network objectives and requirements, such as coverage, capacity, quality of service, cost, etc.
  • -
  • Select the appropriate network technology and standard for your project, such as GSM, UMTS, LTE, NB-IoT, CDMA, Wi-Fi, etc.
  • -
  • Import or create your digital maps using Atoll Forsk's built-in geographic information system (GIS) or external tools.
  • -
  • Create your network elements, such as base stations, antennas, sectors, carriers, etc., using Atoll Forsk's user-friendly interface or external tools.
  • -
  • Define your propagation models and parameters using Atoll Forsk's automatic propagation model tuning feature or external tools.
  • -
  • Perform network simulations and analyses using Atoll Forsk's powerful prediction engine and multi-resolution prediction plots.
  • -
  • Evaluate your network performance and quality using Atoll Forsk's flexible report generator and statistics.
  • -
  • Optimize your network design and configuration using Atoll Forsk's in-built automation and customization capabilities or external tools.
  • -
  • Validate your network design and optimization results using Atoll Forsk's new Live module that allows integrating live network measurement data such as KPIs and UE/MDT (minimization of drive tests).
  • -
- -

By following these tips, you can use Atoll Forsk 3.2 crack effectively and efficiently for your wireless network design and optimization projects.
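To make the propagation and coverage steps in the list above a little more concrete, here is a minimal, self-contained sketch of what a coverage prediction reduces to: a path-loss formula evaluated on a grid around one site and compared against a receiver sensitivity. Every number is an illustrative assumption, and the free-space formula stands in for the far more detailed, terrain- and clutter-aware models a real planning tool would use.

```python
# Toy single-site coverage check using free-space path loss (FSPL).
# All parameter values are illustrative assumptions, not Atoll defaults.
import numpy as np

FREQ_MHZ = 1800.0          # carrier frequency
TX_POWER_DBM = 43.0        # base-station transmit power
TX_GAIN_DBI = 17.0         # transmit antenna gain
CABLE_LOSS_DB = 2.0
RX_GAIN_DBI = 0.0
RX_SENSITIVITY_DBM = -100.0

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20.0 * np.log10(distance_km) + 20.0 * np.log10(freq_mhz)

# 20 km x 20 km grid at 100 m resolution, with the site at the centre.
xs = np.arange(-10.0, 10.0, 0.1)
xx, yy = np.meshgrid(xs, xs)
dist_km = np.maximum(np.hypot(xx, yy), 0.01)   # avoid log10(0) at the site itself

rx_power_dbm = (TX_POWER_DBM + TX_GAIN_DBI - CABLE_LOSS_DB + RX_GAIN_DBI
                - fspl_db(dist_km, FREQ_MHZ))
covered = rx_power_dbm >= RX_SENSITIVITY_DBM

print(f"Covered fraction of the grid: {covered.mean():.1%}")
print(f"Received power at 1 km: {TX_POWER_DBM + TX_GAIN_DBI - CABLE_LOSS_DB - fspl_db(1.0, FREQ_MHZ):.1f} dBm")
```

In practice the path-loss term comes from a tuned empirical or ray-based model evaluated against the digital map layers, and the grid covers every candidate site, but the overall shape of the calculation is the same.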

- -

Conclusion

- -

Atoll Forsk 3.2 crack is a cracked copy of Atoll Forsk, a software package for designing and optimizing wireless networks of various technologies and standards and one of the most popular and powerful tools for radio network planning and analysis. The genuine software is not free: it requires a license key to activate and use its full features and capabilities, and that key is normally provided by the software developer or distributor when you purchase the software legally. Some people, however, may not be able or willing to pay for the software, and may look for alternative ways to get it for free.

- -

One of the common ways to get Atoll Forsk for free is to use a crack. A crack is a modified version of the software that bypasses or removes the license key verification process. By using a crack, you can install and run Atoll Forsk without entering a valid license key. However, using a crack is not legal or ethical, as it violates the terms of service of the software and infringes on the rights of its creators. Moreover, using a crack may expose you to various risks and problems, such as malware infection, data loss, system damage, or legal consequences.

- -

In this article, we have shown you where to find Atoll Forsk 3.2 crack and how to download and use it safely and legally. We have also given you some tips on how to use Atoll Forsk 3.2 crack effectively and efficiently for your wireless network design and optimization projects. We hope that this article has been helpful and informative for you.

- -

If you are interested in using Atoll Forsk 3.2 crack for your wireless network design and optimization projects, you need to be aware of the risks and consequences of using a crack. You also need to respect the rights and efforts of the software developers and distributors who have created and provided this software legally. Therefore, we recommend that you purchase a legal license key for Atoll Forsk and use it according to the terms of service of the software.

- -

Benefits of Using Atoll Forsk 3.2 Crack

- -

Despite the risks and consequences of using a crack, some people may still prefer to use Atoll Forsk 3.2 crack for their wireless network design and optimization projects. This is because using a crack may have some benefits that may outweigh the drawbacks for some users.

- -

Here are some of the benefits of using Atoll Forsk 3.2 crack:

- -
    -
  • You can save money by not paying for the software license key.
  • -
  • You can access and use all the features and capabilities of Atoll Forsk without any limitations or restrictions.
  • -
  • You can use Atoll Forsk on multiple computers or devices without any activation or verification issues.
  • -
  • You can update or upgrade Atoll Forsk without any compatibility or functionality problems.
  • -
  • You can share or distribute Atoll Forsk with other users or colleagues without any legal or ethical issues.
  • -
- -

These benefits may seem attractive and tempting for some users who want to use Atoll Forsk 3.2 crack for their wireless network design and optimization projects. However, these benefits may not last long or may not be worth the risks and consequences that may arise from using a crack. Therefore, you need to weigh the pros and cons carefully before deciding to use Atoll Forsk 3.2 crack.

- -

Alternatives to Atoll Forsk 3.2 Crack

- -

If you are looking for a software that can help you with your wireless network design and optimization projects, but you do not want to use Atoll Forsk 3.2 crack or pay for a legal license key, you may want to consider some alternatives that are available online.

- -

There are some free or open source software that can perform similar functions as Atoll Forsk, such as:

  • Q-Rap: A free and open source software for radio network planning and analysis that supports various technologies and standards, such as GSM, UMTS, LTE, Wi-Fi, WiMAX, etc. It has features such as terrain analysis, propagation modeling, coverage prediction, interference analysis, capacity planning, etc. You can download Q-Rap here: https://sourceforge.net/projects/q-rap/.
  • Radiomobile: A free software for radio network planning and analysis that supports various technologies and standards, such as VHF, UHF, SHF, etc. It has features such as terrain analysis, propagation modeling, coverage prediction, interference analysis, link budget calculation (a toy link-budget calculation is sketched after this list), etc. You can download Radiomobile here: http://www.cplus.org/rmw/english1.html.
  • Splat!: A free software for radio network planning and analysis that supports various technologies and standards, such as VHF, UHF, SHF, etc. It has features such as terrain analysis, propagation modeling, and coverage prediction.
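
To give a concrete sense of the kind of math these planning tools automate, here is a minimal sketch of a link-budget calculation using the free-space path loss model. This is only an illustration: real planners such as Atoll, Radiomobile, and Splat! use terrain-aware propagation models, and the transmit power, antenna gains, distance, and frequency below are made-up example values.

```python
import math

def free_space_path_loss_db(distance_km: float, frequency_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

def received_power_dbm(tx_power_dbm: float, tx_gain_dbi: float, rx_gain_dbi: float,
                       path_loss_db: float, misc_losses_db: float = 0.0) -> float:
    """Basic link budget: P_rx = P_tx + G_tx + G_rx - L_path - L_misc."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - misc_losses_db

# Example: a 2 km link at 2100 MHz (all values are illustrative only).
fspl = free_space_path_loss_db(2.0, 2100.0)
p_rx = received_power_dbm(tx_power_dbm=43.0, tx_gain_dbi=17.0, rx_gain_dbi=0.0,
                          path_loss_db=fspl, misc_losses_db=3.0)
print(f"FSPL: {fspl:.1f} dB, received power: {p_rx:.1f} dBm")
```

Planning tools repeat this kind of calculation over a terrain grid, with more realistic propagation models, to produce the coverage and interference maps mentioned above.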

    Conclusion


    To recap: Atoll Forsk 3.2 is one of the most popular and powerful tools for radio network planning and analysis, but it requires a paid license key, and the cracks that promise to remove that requirement are neither legal nor ethical and can expose you to malware infection, data loss, system damage, or legal consequences.


    In this article, we have shown you where to find Atoll Forsk 3.2 crack and how to download and use it while limiting the risks involved. We have also given you some tips on how to use Atoll Forsk 3.2 effectively and efficiently for your wireless network design and optimization projects, discussed some of the benefits and drawbacks of using the crack, and pointed out some alternatives that are available online.


    If you are interested in using Atoll Forsk 3.2 crack for your wireless network design and optimization projects, we hope that this article has been helpful and informative for you. However, we also urge you to respect the rights and efforts of the software developers and distributors who have created and provided this software legally. Therefore, we recommend that you purchase a legal license key for Atoll Forsk and use it according to the terms of service of the software.


    Thank you for reading this article. If you have any questions or comments, please feel free to contact us.

    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training UPDATED.md b/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training UPDATED.md deleted file mode 100644 index a27255f5997a708c301dddb30654bf3ec8d1e9c8..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training UPDATED.md +++ /dev/null @@ -1,115 +0,0 @@ - -

    FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training: How to Master the Best Tools for Flight Planning and Navigation


    If you are a pilot, a flight instructor, or a flight student, you know how important it is to have the best tools for flight planning and navigation. You need to have accurate and reliable information on weather, airspace, airports, routes, charts, and more. You also need to have a user-friendly and intuitive interface that allows you to access and manage all this information with ease.


    That's why you need FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training. This is a comprehensive and up-to-date training course that teaches you how to use Jeppesen's FliteStar, JeppView, and FliteDeck products. These are the leading software applications for flight planning and navigation that are used by thousands of pilots around the world.

    -

    FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training


    DOWNLOAD 🆗 https://bytlly.com/2uGvX7




    In this article, we will cover everything you need to know about FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training. We will explain what these products are, what they can do for you, and how you can download them. We will also give you some tips and best practices on how to use them effectively and efficiently.


    What are FliteStar, JeppView, and FliteDeck?


    FliteStar, JeppView, and FliteDeck are software applications that are designed to help pilots with flight planning and navigation. They are developed by Jeppesen, a company that has been providing aviation solutions for over 80 years. Here is a brief overview of each product:

    • FliteStar: This is a flight planning software that allows you to create and edit flight plans on your computer. You can enter your departure and destination airports, your aircraft type and performance data, your preferred routes and altitudes, your fuel requirements, and more. FliteStar will then calculate the optimal flight plan for you based on the current weather, airspace, and airport data (a toy distance-and-fuel calculation is sketched after this list). You can also view your flight plan on various charts and maps, such as enroute charts, approach charts, airport diagrams, etc.
    • JeppView: This is a chart viewing software that allows you to access and print electronic versions of Jeppesen's charts on your computer. You can view over 16,000 charts from over 200 countries, including enroute charts, approach charts, airport diagrams, etc. You can also customize your charts with annotations, highlights, notes, etc.
    • FliteDeck: This is a chart display software that allows you to view electronic versions of Jeppesen's charts on your tablet or laptop during flight. You can view your current position and track on the charts using GPS or other navigation sources. You can also access various features such as moving map, terrain awareness, night mode, etc.
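
To make the flight-planning part more concrete, here is a minimal sketch of the kind of distance, time, and fuel arithmetic that sits underneath a planner like FliteStar. This is not Jeppesen's actual algorithm: it ignores wind, airways, and aircraft-specific performance tables, and the airport coordinates, cruise speed, and fuel burn below are approximate, illustrative values only.

```python
import math

def great_circle_nm(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in nautical miles (haversine on a spherical Earth)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    earth_radius_nm = 3440.065  # mean Earth radius in nautical miles
    return 2 * earth_radius_nm * math.asin(math.sqrt(a))

# Seattle (KSEA) to San Francisco (KSFO); coordinates are approximate.
dist_nm = great_circle_nm(47.45, -122.31, 37.62, -122.38)

cruise_kt, burn_gph = 110.0, 10.0            # knots and gallons per hour (illustrative)
time_hr = dist_nm / cruise_kt
fuel_gal = time_hr * burn_gph + 0.75 * burn_gph  # add a 45-minute fuel reserve
print(f"{dist_nm:.0f} nm, {time_hr:.1f} h, {fuel_gal:.1f} gal including reserve")
```

A real planner layers winds aloft, airway routing, climb and descent profiles, and regulatory reserves on top of this basic geometry, which is why dedicated tools are worth learning.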

    What are the Benefits of FliteStar, JeppView, and FliteDeck?


    FliteStar, JeppView, and FliteDeck offer many benefits for pilots who want to plan and execute their flights with ease and confidence. Some of them are:

    • Accuracy: FliteStar, JeppView, and FliteDeck use the most accurate and up-to-date data from Jeppesen's database. They also update automatically with the latest weather, airspace, airport, and chart data.
    • Convenience: FliteStar, JeppView, and FliteDeck allow you to plan and execute your flights with ease and speed. You can create and edit your flight plan on your computer, print or transfer your charts to your tablet or laptop, and view them during flight. You can also access various features and functions with a few clicks or taps.
    • Flexibility: FliteStar, JeppView, and FliteDeck allow you to customize and optimize your flight plan and charts according to your preferences and needs. You can choose from different chart types, scales, formats, colors, etc. You can also modify your flight plan and charts in case of any changes or contingencies.
    • Confidence: FliteStar, JeppView, and FliteDeck give you the confidence and peace of mind that you are flying with the best tools available. You can rely on their accuracy, reliability, and usability. You can also enhance your situational awareness and safety with their features such as terrain awareness, night mode, etc.

    How to Download FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training?


    If you want to download FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training, you need to have a subscription to Jeppesen's services. You can choose from different subscription plans that suit your needs and budget. You can also get a free trial for a limited time.

    -


    To download FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training, you need to follow these steps:

    1. Go to Jeppesen's website and log in with your username and password.
    2. Go to the Downloads section and select the products that you want to download: FliteStar, JeppView, or FliteDeck.
    3. Select the regions and cycles that you want to download.
    4. Click on the Download button and wait for the download to complete.
    5. Install the products on your computer or transfer them to your tablet or laptop.


    Conclusion


    FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training is a comprehensive and up-to-date training course that teaches you how to use Jeppesen's FliteStar, JeppView, and FliteDeck products. These are the leading software applications for flight planning and navigation that are used by thousands of pilots around the world.


    By downloading FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training, you will learn how to use these products effectively and efficiently. You will also enjoy their benefits such as accuracy, convenience, flexibility, and confidence.


    FULL Jeppesen FliteStar V941 JeppView V361 FliteDeck Chart Training is the ultimate training course for anyone who wants to master the best tools for flight planning and navigation. Download it today and become a better pilot!

        \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/face3d/util/generate_list.py b/spaces/lithiumice/SadTalker/src/face3d/util/generate_list.py deleted file mode 100644 index 943d906781063c3584a7e5b5c784f8aac0694985..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/util/generate_list.py +++ /dev/null @@ -1,34 +0,0 @@ -"""This script is to generate training list files for Deep3DFaceRecon_pytorch -""" - -import os - -# save path to training data -def write_list(lms_list, imgs_list, msks_list, mode='train',save_folder='datalist', save_name=''): - save_path = os.path.join(save_folder, mode) - if not os.path.isdir(save_path): - os.makedirs(save_path) - with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in lms_list]) - - with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in imgs_list]) - - with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in msks_list]) - -# check if the path is valid -def check_list(rlms_list, rimgs_list, rmsks_list): - lms_list, imgs_list, msks_list = [], [], [] - for i in range(len(rlms_list)): - flag = 'false' - lm_path = rlms_list[i] - im_path = rimgs_list[i] - msk_path = rmsks_list[i] - if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path): - flag = 'true' - lms_list.append(rlms_list[i]) - imgs_list.append(rimgs_list[i]) - msks_list.append(rmsks_list[i]) - print(i, rlms_list[i], flag) - return lms_list, imgs_list, msks_list diff --git a/spaces/louiszhuang/pony/app.py b/spaces/louiszhuang/pony/app.py deleted file mode 100644 index d6f68c57bbddce5b8c459cab4b1e85296491969b..0000000000000000000000000000000000000000 --- a/spaces/louiszhuang/pony/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd - -dataframe = pd.DataFrame( - np.random.randn(10, 20), - columns=('col %d' % i for i in range(20))) - -st.dataframe(dataframe.style.highlight_max(axis=0)) -chart_data = pd.DataFrame( - np.random.randn(20, 3), - columns=['a', 'b', 'c']) - -st.line_chart(chart_data) -map_data = pd.DataFrame( - np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4], - columns=['lat', 'lon']) - -st.map(map_data) -st.text_input("Your name", key="name") - -# You can access the value at any point with: -st.session_state.name -df = pd.DataFrame({ - 'first column': [1, 2, 3, 4], - 'second column': [10, 20, 30, 40] -}) - -option = st.selectbox( - 'Which number do you like best?', - df['first column']) - -'You selected: ', option diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/catch.cpp b/spaces/ma-xu/LIVE/pybind11/tests/test_embed/catch.cpp deleted file mode 100644 index dd137385cb32250b8640169934fb96aa5e80f069..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/catch.cpp +++ /dev/null @@ -1,22 +0,0 @@ -// The Catch implementation is compiled here. This is a standalone -// translation unit to avoid recompiling it for every test change. - -#include - -#ifdef _MSC_VER -// Silence MSVC C++17 deprecation warning from Catch regarding std::uncaught_exceptions (up to catch -// 2.0.1; this should be fixed in the next catch release after 2.0.1). 
-# pragma warning(disable: 4996) -#endif - -#define CATCH_CONFIG_RUNNER -#include - -namespace py = pybind11; - -int main(int argc, char *argv[]) { - py::scoped_interpreter guard{}; - auto result = Catch::Session().run(argc, argv); - - return result < 0xff ? result : 0xff; -} diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/options/__init__.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/options/__init__.py deleted file mode 100644 index 59e481eb93dda48c81e04dd491cd3c9190c8eeb4..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/options/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. diff --git a/spaces/marianna13/search-inside-a-video/app.py b/spaces/marianna13/search-inside-a-video/app.py deleted file mode 100644 index 565f8d6c36621e5083bebcf1ba39a1a96699f5eb..0000000000000000000000000000000000000000 --- a/spaces/marianna13/search-inside-a-video/app.py +++ /dev/null @@ -1,166 +0,0 @@ -import gradio as gr -import yt_dlp -import os -import time -import torch -import transformers -import clip -import numpy as np -import cv2 -import random -from PIL import Image -from multilingual_clip import pt_multilingual_clip - - - -class SearchVideo: - - def __init__( - self, - clip_model: str, - text_model: str, - tokenizer, - compose, - ) -> None: - """ - clip_model: CLIP model to use for image embeddings - text_model: text encoder model - """ - self.text_model = text_model - self.tokenizer = tokenizer - self.clip_model = clip_model - self.compose = compose - self.device = "cuda" if torch.cuda.is_available() else "cpu" - - - def __call__(self, video: str, text: str) -> list: - torch.cuda.empty_cache() - img_list = [] - text_list = [] - frames = self.video2frames_ffmpeg(video) - - - img_embs = self.get_img_embs(frames) - txt_emb = self.get_txt_embs(text) - # txt_emb = [[t]*len(frames) for t in txt_emb] - txt_emb = txt_emb*len(frames) - - logits_per_image = self.compare_embeddings(img_embs, txt_emb) - logits_per_image = [logit.numpy()[0] for logit in logits_per_image] - ind = np.argmax(logits_per_image) - seg_path = self.extract_seg(video, ind) - return ind, seg_path, frames[ind] - - - def extract_seg(self, video:str, start:int): - start = start if start > 5 else start-5 - start = time.strftime('%H:%M:%S', time.gmtime(start)) - cmd = f'ffmpeg -ss {start} -i "{video}" -t 00:00:02 -vcodec copy -acodec copy -y segment_{start}.mp4' - os.system(cmd) - return f'segment_{start}.mp4' - - def video2frames_ffmpeg(self, video: str) -> list: - frames_dir = 'frames' - if not os.path.exists(frames_dir): - os.makedirs(frames_dir) - - select = "select='if(eq(n\,0),1,floor(t)-floor(prev_selected_t))'" - os.system(f'ffmpeg -i {video} -r 1 {frames_dir}/output-%04d.jpg') - - images = [Image.open(f'{frames_dir}/{f}') for f in sorted(os.listdir(frames_dir))] - os.system(f'rm -rf {frames_dir}') - return images - - def video2frames(self, video: str) -> list: - cap = cv2.VideoCapture(video) - num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - images = [] - frames_sec = [i for i in range(0, num_frames, 24*1)] - has_frames,image = cap.read() - frame_count = 0 - while has_frames: - has_frames,image = cap.read() - frame_count += 1 - if has_frames: - if frame_count in frames_sec: - image = Image.fromarray(image) - images.append(image) - return images - - def get_img_embs(self, img_list: list) -> list: - """ - takes list of image and calculates 
clip embeddings with model specified by clip_model - """ - img_input = torch.stack([self.compose(img).to(self.device) - for img in img_list]) - with torch.no_grad(): - image_embs = self.clip_model.encode_image(img_input).float().cpu() - return image_embs - - def get_txt_embs(self, text: str) -> torch.Tensor: - "calculates clip emebdding for the text " - with torch.no_grad(): - return self.text_model(text, self.tokenizer) - - def compare_embeddings(self, img_embs, txt_embs): - # normalized features - image_features = img_embs / img_embs.norm(dim=-1, keepdim=True) - text_features = txt_embs / txt_embs.norm(dim=-1, keepdim=True) - - # cosine similarity as logits - logits_per_image = [] - for image_feature in image_features: - logits_per_image.append(image_feature @ text_features.t()) - - return logits_per_image - -def download_yt_video(url): - ydl_opts = { - 'quiet': True, - "outtmpl": "%(id)s.%(ext)s", - 'format': 'bv*[height<=360][ext=mp4]+ba/b[height<=360] / wv*+ba/w' - } - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - return url.split('/')[-1].replace('watch?v=', '')+'.mp4' - - -clip_model='ViT-B/32' -text_model='M-CLIP/XLM-Roberta-Large-Vit-B-32' -clip_model, compose = clip.load(clip_model) -tokenizer = transformers.AutoTokenizer.from_pretrained(text_model) -text_model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(text_model) - -def search_video(video_url, text, video=None): - search = SearchVideo( - clip_model=clip_model, - text_model=text_model, - tokenizer=tokenizer, - compose=compose - ) - if video !=None: - video_url = None - if video_url: - video = download_yt_video(video_url) - ind, seg_path, img = search(video, text) - start = time.strftime('%H:%M:%S', time.gmtime(ind)) - return f'"{text}" found at {start}', seg_path - -title = '🔎🎞️🚀 Search inside a video' -description = '''Just enter a search query, a video URL or upload your video and get a 2-sec fragment from the video which is visually closest to you query.''' - -examples = [["https://www.youtube.com/watch?v=M93w3TjzVUE", "A dog"]] - -iface = gr.Interface( - search_video, - inputs=[gr.Textbox(value="https://www.youtube.com/watch?v=M93w3TjzVUE", label='Video URL'), gr.Textbox(value="a dog", label='Text query'), gr.Video()], - outputs=[gr.Textbox(label="Output"), gr.Video(label="Video segment")], - allow_flagging="never", - title=title, - description=description, - examples=examples - ) - -if __name__ == "__main__": - iface.launch(show_error=True) \ No newline at end of file diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/main_train_speaker_aware.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/main_train_speaker_aware.py deleted file mode 100644 index e264838411d114e46d301009dc848684f8656e7f..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/main_train_speaker_aware.py +++ /dev/null @@ -1,140 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. 
- -""" - -import os, glob -import numpy as np -import cv2 -import argparse -import platform -import torch -from util.utils import try_mkdir -from approaches.train_speaker_aware import Speaker_aware_branch - - -if platform.release() == '4.4.0-83-generic': - ROOT_DIR = r'/mnt/ntfs/Dataset/TalkingToon/VoxCeleb2' -else: # 3.10.0-957.21.2.el7.x86_64 - ROOT_DIR = r'/mnt/nfs/work1/kalo/yangzhou/TalkingToon/VoxCeleb2' - -DEMO_CH = '' - -parser = argparse.ArgumentParser() -parser.add_argument('--nepoch', type=int, default=1001, help='number of epochs to train for') -parser.add_argument('--batch_size', type=int, default=1, help='batch size') -parser.add_argument('--in_batch_nepoch', type=int, default=1, help='') -parser.add_argument('--first_in_batch_nepoch', type=int, default=1, help='') -parser.add_argument('--segment_batch_size', type=int, default=512, help='batch size') -parser.add_argument('--num_window_frames', type=int, default=18, help='') -parser.add_argument('--num_window_frames_seq', type=int, default=18, help='') -parser.add_argument('--num_window_frames_sync', type=int, default=18, help='') -parser.add_argument('--num_window_step', type=int, default=1, help='') -parser.add_argument('--dump_dir', type=str, default='', help='') -parser.add_argument('--dump_file_name', type=str, default='celeb_normrot', help='') -parser.add_argument('--lr', type=float, default=1e-3, help='learning rate') -parser.add_argument('--reg_lr', type=float, default=1e-6, help='weight decay') -parser.add_argument('--drop_out', type=float, default=0, help='drop out') -parser.add_argument('--verbose', type=int, default=1, help='0 - detail, 2 - simplify') -parser.add_argument('--write', default=False, action='store_true') - -parser.add_argument('--add_pos', default=False, action='store_true') -parser.add_argument('--use_motion_loss', default=False, action='store_true') - - -parser.add_argument('--name', type=str, default='tmp') -parser.add_argument('--puppet_name', type=str, default=DEMO_CH) - -parser.add_argument('--in_size', type=int, default=80) - -parser.add_argument('--use_lip_weight', default=False, action='store_true') -parser.add_argument('--use_adain', default=False, action='store_true') -parser.add_argument('--use_residual', default=False, action='store_true') -parser.add_argument('--use_norm_emb', default=False, action='store_true') -parser.add_argument('--use_cycle_loss', default=False, action='store_true') -parser.add_argument('--lambda_cycle_loss', default=1.0, type=float) -parser.add_argument('--emb_coef', default=3.0, type=float) - -parser.add_argument('--freeze_content_emb', default=False, action='store_true') -parser.add_argument('--pretrain_g', default=False, action='store_true') - -parser.add_argument('--spk_emb_enc_size', default=16, type=int) -parser.add_argument('--c_enc_hidden_size', default=256, type=int) -parser.add_argument('--lstm_g_hidden_size', default=256, type=int) -parser.add_argument('--projection_size', default=512, type=int) - -parser.add_argument('--use_addinfo_format', default='motion_and_pos') -parser.add_argument('--l2_on_fls_without_traj', default=False, action='store_true') -parser.add_argument('--train_with_grad_penalty', default=False, action='store_true') -parser.add_argument('--train_DL', default=-1.0, type=float) -parser.add_argument('--train_DT', default=-1.0, type=float) -parser.add_argument('--train_G_only', default=False, action='store_true') -parser.add_argument('--lambda_mse_loss', default=1.0, type=float) -parser.add_argument('--teacher_force', default=0.0, 
type=float) -parser.add_argument('--debug_version', default='', type=str) -parser.add_argument('--lambda_add_info_loss', default=1.0, type=float) - - -parser.add_argument('--show_animation', default=False, action='store_true') - - - -# model -parser.add_argument('--pos_dim', default=7, type=int) -parser.add_argument('--use_prior_net', default=True, action='store_true') -parser.add_argument('--transformer_d_model', default=32, type=int) -parser.add_argument('--transformer_N', default=2, type=int) -parser.add_argument('--transformer_heads', default=2, type=int) -parser.add_argument('--load_a2l_C_name', type=str, default='MakeItTalk/examples/ckpt/ckpt_audio2landmark_c.pth') -parser.add_argument('--init_content_encoder', type=str, default='MakeItTalk/examples/ckpt/ckpt_audio2landmark_c.pth') # 'tt_lipwpre_prior_useclose/ckpt_last_epoch_20.pth') -parser.add_argument('--load_a2l_G_name', type=str, default='/mnt/ntfs/Dataset/TalkingToon/VoxCeleb2/ckpt/local_da_merge_3/ckpt_e_50.pth') # - - -# data -parser.add_argument('--use_11spk_only', default=True, action='store_true') - -# arch -parser.add_argument('--use_reg_as_std', default=True, action='store_false') -parser.add_argument('--lambda_laplacian_smooth_loss', default=1.0, type=float) - -# test -parser.add_argument('--test_emb', default=False, action='store_true') -parser.add_argument('--train', default=False, action='store_true') -parser.add_argument('--test_end2end', default=False, action='store_true') - -# save model -parser.add_argument('--jpg_freq', type=int, default=25, help='') -parser.add_argument('--ckpt_epoch_freq', type=int, default=25, help='') - -AMP = {'default':[2.5, 2.5, 1.0]} -if(DEMO_CH not in AMP.keys()): - AMP[DEMO_CH] = AMP['default'] - -parser.add_argument('--amp_lip_x', type=float, default=AMP[DEMO_CH][0]) -parser.add_argument('--amp_lip_y', type=float, default=AMP[DEMO_CH][1]) -parser.add_argument('--amp_pos', type=float, default=AMP[DEMO_CH][2]) - -opt_parser = parser.parse_args() - -root_dir = ROOT_DIR -opt_parser.root_dir = ROOT_DIR -opt_parser.dump_dir = os.path.join(root_dir, 'dump') -opt_parser.ckpt_dir = os.path.join(root_dir, 'ckpt', opt_parser.name) -try_mkdir(opt_parser.ckpt_dir) -opt_parser.log_dir = os.path.join(root_dir, 'log') - -# make directory for nn outputs -try_mkdir(opt_parser.dump_dir.replace('dump','nn_result')) -try_mkdir(os.path.join(opt_parser.dump_dir.replace('dump', 'nn_result'), opt_parser.name)) - - -model = Speaker_aware_branch(opt_parser) -if(opt_parser.train): - model.train() -else: - model.test() \ No newline at end of file diff --git a/spaces/mateuseap/magic-vocals/config.py b/spaces/mateuseap/magic-vocals/config.py deleted file mode 100644 index 5b72235b58b65ac629f49bcc4aad032b5b59d8d4..0000000000000000000000000000000000000000 --- a/spaces/mateuseap/magic-vocals/config.py +++ /dev/null @@ -1,204 +0,0 @@ -import argparse -import sys -import torch -import json -from multiprocessing import cpu_count - -global usefp16 -usefp16 = False - - -def use_fp32_config(): - usefp16 = False - device_capability = 0 - if torch.cuda.is_available(): - device = torch.device("cuda:0") # Assuming you have only one GPU (index 0). 
- device_capability = torch.cuda.get_device_capability(device)[0] - if device_capability >= 7: - usefp16 = True - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as d: - data = json.load(d) - - if "train" in data and "fp16_run" in data["train"]: - data["train"]["fp16_run"] = True - - with open(f"configs/{config_file}", "w") as d: - json.dump(data, d, indent=4) - - print(f"Set fp16_run to true in {config_file}") - - with open( - "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8" - ) as f: - strr = f.read() - - strr = strr.replace("3.0", "3.7") - - with open( - "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8" - ) as f: - f.write(strr) - else: - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - data = json.load(f) - - if "train" in data and "fp16_run" in data["train"]: - data["train"]["fp16_run"] = False - - with open(f"configs/{config_file}", "w") as d: - json.dump(data, d, indent=4) - - print(f"Set fp16_run to false in {config_file}") - - with open( - "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8" - ) as f: - strr = f.read() - - strr = strr.replace("3.7", "3.0") - - with open( - "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8" - ) as f: - f.write(strr) - else: - print( - "CUDA is not available. Make sure you have an NVIDIA GPU and CUDA installed." - ) - return (usefp16, device_capability) - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - self.paperspace, - self.is_cli, - ) = self.arg_parse() - - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - exe = sys.executable or "python" - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument("--pycmd", type=str, default=exe, help="Python command") - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument( # Fork Feature. Paperspace integration for web UI - "--paperspace", - action="store_true", - help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.", - ) - parser.add_argument( # Fork Feature. Embed a CLI into the infer-web.py - "--is_cli", - action="store_true", - help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.paperspace, - cmd_opts.is_cli, - ) - - # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. 
- # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("Found GPU", self.gpu_name, ", force to fp32") - self.is_half = False - else: - print("Found GPU", self.gpu_name) - use_fp32_config() - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif self.has_mps(): - print("No supported Nvidia GPU found, use MPS instead") - self.device = "mps" - self.is_half = False - use_fp32_config() - else: - print("No supported Nvidia GPU found, use CPU instead") - self.device = "cpu" - self.is_half = False - use_fp32_config() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/matthoffner/chatbot/types/prompt.ts b/spaces/matthoffner/chatbot/types/prompt.ts deleted file mode 100644 index fb5c2ef5b02986f5a545ba75cdd1b6a04fc594a0..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/types/prompt.ts +++ /dev/null @@ -1,10 +0,0 @@ -import { OpenAIModel } from './openai'; - -export interface Prompt { - id: string; - name: string; - description: string; - content: string; - model: OpenAIModel; - folderId: string | null; -} diff --git a/spaces/merle/PROTEIN_GENERATOR/examples/loop_design.sh b/spaces/merle/PROTEIN_GENERATOR/examples/loop_design.sh deleted file mode 100644 index de6c21db8ed2bd4ab526e7c41e8fb7bd13f76754..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/examples/loop_design.sh +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/bash -#SBATCH -J seq_diff -#SBATCH -p gpu -#SBATCH --mem=8g -#SBATCH --gres=gpu:a6000:1 -#SBATCH -o ./out/slurm/slurm_%j.out - -source activate /software/conda/envs/SE3nv - -srun python ../inference.py \ - --num_designs 10 \ - --pdb pdbs/G12D_manual_mut.pdb \ - --out out/ab_loop \ - --contigs A2-176,0 C7-16,0 H2-95,12-15,H111-116,0 L1-45,10-12,L56-107 \ - --T 25 --save_best_plddt --loop_design diff --git a/spaces/merle/PROTEIN_GENERATOR/model/diffusion.py b/spaces/merle/PROTEIN_GENERATOR/model/diffusion.py deleted file mode 100644 index 8699f7d8acb3c30d0fed7991f29ab0f27b370861..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/model/diffusion.py +++ /dev/null @@ -1,217 +0,0 @@ -import enum -import math - -import numpy as np -import torch as th - - -########################################################################################## - -# DIFFUSION CODE BASE FOR PROTEIN SEQUENCE DIFFUSION WAS ADAPTED FROM LM-DIFFUSION # - - # 
(https://github.com/XiangLi1999/Diffusion-LM) # - -########################################################################################## - -class GaussianDiffusion_SEQDIFF: - """ - T = number of timesteps to set up diffuser with - - schedule = type of noise schedule to use linear, cosine, gaussian - - noise = type of ditribution to sample from; DEFAULT - normal_gaussian - - """ - - def __init__(self, - T=1000, - schedule='sqrt', - sample_distribution='normal', - sample_distribution_gmm_means=[-1.0, 1.0], - sample_distribution_gmm_variances=[1.0, 1.0], - F=1, - ): - - # Use float64 for accuracy. - betas = np.array(get_named_beta_schedule(schedule, T), dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - self.F = F - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = (betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)) - # log calculation clipped because the posterior variance is 0 at the - # beginning of the diffusion chain. - self.posterior_log_variance_clipped = np.log(np.append(self.posterior_variance[1], self.posterior_variance[1:])) - self.posterior_mean_coef1 = (betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod)) - self.posterior_mean_coef2 = ((1.0 - self.alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - self.alphas_cumprod)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - - # sample_distribution_params - self.sample_distribution = sample_distribution - self.sample_distribution_gmm_means = [float(mean) for mean in sample_distribution_gmm_means] - self.sample_distribution_gmm_variances = [float(variance) for variance in sample_distribution_gmm_variances] - - if self.sample_distribution == 'normal': - self.noise_function = th.randn_like - else: - self.noise_function = self.randnmixture_like - - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = ( - _extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - ) - variance = _extract(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract( - self.log_one_minus_alphas_cumprod, t, x_start.shape - ) - return mean, variance, log_variance - - def q_sample(self, x_start, t, mask=None, DEVICE=None): - """ - Diffuse the data for a given number of diffusion steps. - In other words, sample from q(x_t | x_0). - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. 
- """ - - # noise_function is determined in init depending on type of noise specified - noise = self.noise_function(x_start)*(self.F**2) - if DEVICE != None: - noise = noise.to(DEVICE) - - assert noise.shape == x_start.shape - x_sample = ( - _extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) - * noise) - - if mask is not None: - x_sample[mask]=x_start[mask] - - return x_sample - - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - q(x_{t-1} | x_t, x_0) - """ - assert x_start.shape == x_t.shape - - posterior_mean = (_extract(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract(self.posterior_mean_coef2, t, x_t.shape) * x_t) - - posterior_variance = _extract(self.posterior_variance, t, x_t.shape) - - posterior_log_variance_clipped = _extract(self.posterior_log_variance_clipped, t, x_t.shape) - - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - - def randnmixture_like(self, tensor_like, number_normal=3, weights_normal=None): - - if self.sample_distribution_gmm_means and self.sample_distribution_gmm_variances: - assert len(self.sample_distribution_gmm_means) == len(self.sample_distribution_gmm_variances) - - if not weights_normal: - mix = th.distributions.Categorical(th.ones(len(self.sample_distribution_gmm_means))) #number_normal - else: - assert len(weights_normal) == number_normal - mix = th.distributions.Categorical(weights_normal) - #comp = torch.distributions.Normal(torch.randn(number_normal), torch.rand(number_normal)) - comp = th.distributions.Normal(th.tensor(self.sample_distribution_gmm_means), th.tensor(self.sample_distribution_gmm_variances)) - #comp = torch.distributions.Normal([-3, 3], [1, 1]) - #comp = torch.distributions.Normal([-3, 0, 3], [1, 1, 1]) - #comp = torch.distributions.Normal([-3, 0, 3], [1, 1, 1]) - gmm = th.distributions.mixture_same_family.MixtureSameFamily(mix, comp) - return th.tensor([gmm.sample() for _ in range(np.prod(tensor_like.shape))]).reshape(tensor_like.shape) - - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - beta_start = scale * 0.0001 - beta_end = scale * 0.02 - return np.linspace(beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64) - - elif schedule_name == "cosine": - return betas_for_alpha_bar(num_diffusion_timesteps, lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2,) - - elif schedule_name == 'sqrt': - return betas_for_alpha_bar(num_diffusion_timesteps, lambda t: 1-np.sqrt(t + 0.0001),) - - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. 
- :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def _extract(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) diff --git a/spaces/merve/data-leak/index.html b/spaces/merve/data-leak/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
        -

        Welcome to your static Space!

        -

        - You can modify this app directly by editing index.html in the - Files and versions tab. -

        -

        - Also don't forget to check the - Spaces documentation. -

        -
        - - diff --git a/spaces/merve/data-leak/public/anonymization/make-sliders.js b/spaces/merve/data-leak/public/anonymization/make-sliders.js deleted file mode 100644 index 72f6dfd7c96d6c74cfb35db5854f06b668bf3d46..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/anonymization/make-sliders.js +++ /dev/null @@ -1,139 +0,0 @@ -window.makeSliders = function(){ - var rv = { - population: 144, - headsProb: .5, - } - - rv.updateHeadsProb = (headsProb) => { - rv.headsProb = headsProb - updateSliderPos() - - - estimates.updateEstimates() - estimates.render() - } - - rv.updatePopulation = (population) => { - rv.population = population - updateSliderPos() - - - var scale = d3.clamp(0, 13 / Math.sqrt(population), 1) - sel.studentGroup.st({ - transformOrigin: 'top', - transformOrigin: c.width/2 + 'px ' + 160 + 'px', - transform: `scale(${scale})` - }) - - estimates.updateEstimates() - estimates.render() - - sel.student.classed('inactive',(d, i) => i >= population) - } - - rv.updatePopulationSlider = (val) => { - rv.updatePopulation(val) - } - - rv.updateNoiseSlider = (val) => { - rv.updateHeadsProb(val) - } - - var updateSliderPos = (function(){ - var width = d3.clamp(50, window.innerWidth/2 - 40, 145) - var height = 30 - var color = '#007276' - - var sliderVals = { - population: { - key: 'population', - textFn: d => rv.population + ' students' , - r: [144, 756], - v: 144, - stepFn: d => rv.updatePopulation(Math.round(d.v/2)*2), - }, - headsProb: { - key: 'headsProb', - textFn: d => d3.format('.1%')(rv.headsProb) + ' chance of heads', - r: [.2, .5], - v: .5, - stepFn: d => rv.updateHeadsProb(d.v), - } - } - var sliders = [sliderVals.headsProb, sliderVals.population, sliderVals.headsProb] - sliders.forEach(d => { - d.s = d3.scaleLinear().domain(d.r).range([0, width]) - }) - - var sliderSel = d3.selectAll('.slide-container-population,.slide-container-heads-prob').html('') - .data(sliders) - .classed('slider', true) - .st({ - display: 'inline-block', - width: width, - paddingRight: (d, i) => i == 1 ? 
40 : 0, - marginTop: 20, - }) - - var textSel = sliderSel.append('div.slider-label-container') - .st({marginBottom: -5}) - - var svgSel = sliderSel.append('svg').at({width, height}) - .on('click', function(d){ - d.v = d.s.invert(d3.mouse(this)[0]) - d.stepFn(d) - }) - .st({ - cursor: 'pointer' - }) - .append('g').translate(height/2, 1) - svgSel.append('rect').at({width, height, y: -height/2, fill: 'rgba(0,0,0,0)'}) - - svgSel.append('path').at({ - d: `M 0 -.5 H ${width}`, - stroke: color, - strokeWidth: 1 - }) - - var leftPathSel = svgSel.append('path').at({ - d: `M 0 -.5 H ${width}`, - stroke: color, - strokeWidth: 3 - }) - - - var drag = d3.drag() - .on('drag', function(d){ - var x = d3.mouse(this)[0] - d.v = d3.clamp(d3.min(d.r), d.s.invert(x), d3.max(d.r)) - d.stepFn(d) - }) - - var rectSel = svgSel.append('rect') - .at({ - width: height/2 - 1, - height: height/2 - 1, - stroke: color, - strokeWidth: 3, - fill: '#fff', - }) - .translate([-height/4, -height/4]) - .call(drag) - - return isDrag => { - rectSel.at({x: d => Math.round(d.s(rv[d.key]))}) - textSel.text(d => d.textFn(d)) - - leftPathSel.at({d: d => `M 0 -.5 H ${d.s(rv[d.key])}`}) - } - })() - updateSliderPos() - - - return rv -} - - - - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/merve/data-leak/public/uncertainty-calibration/draw_calibrationcurve.js b/spaces/merve/data-leak/public/uncertainty-calibration/draw_calibrationcurve.js deleted file mode 100644 index c7992a7c79b1a5187bc3f267350869904c836626..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/uncertainty-calibration/draw_calibrationcurve.js +++ /dev/null @@ -1,102 +0,0 @@ - -window.drawCalibrationCurve = function (graphSel, fig_height, fig_width){ - var width = Math.min(fig_height, fig_width) - var sel = graphSel - .append('div').st({textAlign: 'center'}) - .append('div').st({display: 'inline-block'}) - - var c = d3.conventions({ - sel, - width, - height: width, - margin: {top: 40} - }); - - c.svg.parent() - - //TODO(nthain) Who owns the buckets? 
We have at least 2 instances, reduce to 1 - var buckets = d3.pairs(window.weatherGraph.thresholds) - buckets.forEach(bucket => { - bucket.val = d3.mean(bucket, d => d.origVal) - }) - - c.xAxis.tickValues(buckets.map(d => d.val)).tickFormat(d3.format('.2f')) - c.yAxis.tickValues(buckets.map(d => d.val)).tickFormat(d3.format('.2f')) - d3.drawAxis(c) - window.util.ggPlotBg(c) - - window.util.addAxisLabel(c, 'Calibrated Model Score', 'Probability of Rain') - - var eceSel = c.svg.append('g.ece') - var eceBox = eceSel.append('rect.val-box') - .at({width: 55, height: 20, x: c.width/2 + 72.5, y: -35, rx: 3, ry: 3}) - var eceText = eceSel.append('text.big-text') - .at({y: -20, x: c.width/2-30, textAnchor: 'middle'}) - var eceVal = eceSel.append('text.val-text') - .at({y: -20, x: c.width/2+100, textAnchor: 'middle'}) - - c.svg.append('path') - .at({ - d: ['M', 0, c.height, 'L', c.width, 0].join(' '), - stroke: '#555', - strokeDasharray: '3 3', - }) - - var bucketSel = c.svg.appendMany('g.bucket', buckets) - - var circleSel = bucketSel.append('circle') - .at({fillOpacity: .4, fill: 'steelblue'}) - - var pathSel = bucketSel.append('path') - .at({stroke: 'steelblue', strokeWidth: 3}) - - var bucketText = bucketSel.append('text').text('8 / 10') - .at({textAnchor: 'start', dy: '.33em', fontSize: 10, fill: '#000'}) - - - // function remap_score(s) { - // // new_score = min_threshold_new + (old_score-min_threshold_old)(max_threshold_new-min_threshold_new)/(max_threshold_old-min_threshold_old) - // //find index less than score - // } - - function renderBuckets(){ - var filter_rain = window.slides.slide?.filter_rain - - buckets.forEach(bucket => { - bucket.data = weatherdata - .filter(d => bucket[0].val <= d.score && d.score <= bucket[1].val) - .filter(d => !filter_rain || !d.is_filter) - - bucket.nPositive = d3.sum(bucket.data, d => d.label) - bucket.percent = bucket.nPositive/bucket.data.length - - if (isNaN(bucket.percent)) bucket.percent = bucket[0].val - }) - - var ece = d3.sum(buckets, d => d.data.length*Math.abs(d.val - d.percent)) - ece = ece/d3.sum(buckets, d => d.data.length) - - eceText.text('Expected Calibration Error: ') - eceVal.text(d3.format('.3f')(ece)) - - var rScale = d3.scaleSqrt().domain([0, 50]).range([0, 20]) - - bucketSel - .st({opacity: d => d.data.length}) - .filter(d => d.data.length) - .translate(d => [c.x(d.val), c.y(d.percent)]) - - circleSel - .at({r: d => rScale(d.data.length)}) - - pathSel.at({d: d => 'M 0 0 V ' + (c.y(d.val) - c.y(d.percent))}) - - bucketText - .text(d => `${d.nPositive} / ${d.data.length}`) - .at({x: d => rScale(d.data.length) + 2}) - } - - return {renderBuckets, c, buckets, calibrationDataFn: () => console.log('test')} -} - -if (window.init) window.init() diff --git a/spaces/merve/uncertainty-calibration/source/third_party/topojson-client.js b/spaces/merve/uncertainty-calibration/source/third_party/topojson-client.js deleted file mode 100644 index 728070f185d11aa72b3f78ab88037275614fe89b..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/third_party/topojson-client.js +++ /dev/null @@ -1,2 +0,0 @@ -// https://github.com/topojson/topojson-client v3.0.1 Copyright 2019 Mike Bostock -!function(e,r){"object"==typeof exports&&"undefined"!=typeof module?r(exports):"function"==typeof define&&define.amd?define(["exports"],r):r((e=e||self).topojson=e.topojson||{})}(this,function(e){"use strict";function r(e){return e}function t(e){if(null==e)return r;var t,n,o=e.scale[0],a=e.scale[1],i=e.translate[0],c=e.translate[1];return 
function(e,r){r||(t=n=0);var u=2,f=e.length,s=new Array(f);for(s[0]=(t+=e[0])*o+i,s[1]=(n+=e[1])*a+c;ui&&(i=e[0]),e[1]c&&(c=e[1])}function f(e){switch(e.type){case"GeometryCollection":e.geometries.forEach(f);break;case"Point":u(e.coordinates);break;case"MultiPoint":e.coordinates.forEach(u)}}for(r in e.arcs.forEach(function(e){for(var r,t=-1,u=e.length;++ti&&(i=r[0]),r[1]c&&(c=r[1])}),e.objects)f(e.objects[r]);return[o,a,i,c]}function o(e,r){var t=r.id,n=r.bbox,o=null==r.properties?{}:r.properties,i=a(e,r);return null==t&&null==n?{type:"Feature",properties:o,geometry:i}:null==n?{type:"Feature",id:t,properties:o,geometry:i}:{type:"Feature",id:t,bbox:n,properties:o,geometry:i}}function a(e,r){var n=t(e.transform),o=e.arcs;function a(e,r){r.length&&r.pop();for(var t=o[e<0?~e:e],a=0,i=t.length;a1)n=function(e,r,t){var n,o=[],a=[];function i(e){var r=e<0?~e:e;(a[r]||(a[r]=[])).push({i:e,g:n})}function c(e){e.forEach(i)}function u(e){e.forEach(c)}return function e(r){switch(n=r,r.type){case"GeometryCollection":r.geometries.forEach(e);break;case"LineString":c(r.arcs);break;case"MultiLineString":case"Polygon":u(r.arcs);break;case"MultiPolygon":!function(e){e.forEach(u)}(r.arcs)}}(r),a.forEach(null==t?function(e){o.push(e[0].i)}:function(e){t(e[0].g,e[e.length-1].g)&&o.push(e[0].i)}),o}(0,r,t);else for(o=0,n=new Array(a=e.arcs.length);o1)for(var a,c,f=1,s=u(o[0]);fs&&(c=o[0],o[0]=o[f],o[f]=c,s=a);return o}).filter(function(e){return e.length>0})}}function f(e,r){for(var t=0,n=e.length;t>>1;e[o]=2))throw new Error("n must be ≥2");var t,o=(u=e.bbox||n(e))[0],a=u[1],i=u[2],c=u[3];r={scale:[i-o?(i-o)/(t-1):1,c-a?(c-a)/(t-1):1],translate:[o,a]}}var u,f,l=s(r),h=e.objects,p={};function g(e){return l(e)}function y(e){var r;switch(e.type){case"GeometryCollection":r={type:"GeometryCollection",geometries:e.geometries.map(y)};break;case"Point":r={type:"Point",coordinates:g(e.coordinates)};break;case"MultiPoint":r={type:"MultiPoint",coordinates:e.coordinates.map(g)};break;default:return e}return null!=e.id&&(r.id=e.id),null!=e.bbox&&(r.bbox=e.bbox),null!=e.properties&&(r.properties=e.properties),r}for(f in h)p[f]=y(h[f]);return{type:"Topology",bbox:u,transform:r,objects:p,arcs:e.arcs.map(function(e){var r,t=0,n=1,o=e.length,a=new Array(o);for(a[0]=l(e[0],0);++t= 200): - continue - text_data = [] - flag = False - with cs.open(pjoin(opt.text_dir, name + '.txt')) as f: - for line in f.readlines(): - text_dict = {} - line_split = line.strip().split('#') - caption = line_split[0] - tokens = line_split[1].split(' ') - f_tag = float(line_split[2]) - to_tag = float(line_split[3]) - f_tag = 0.0 if np.isnan(f_tag) else f_tag - to_tag = 0.0 if np.isnan(to_tag) else to_tag - - text_dict['caption'] = caption - text_dict['tokens'] = tokens - if f_tag == 0.0 and to_tag == 0.0: - flag = True - text_data.append(text_dict) - else: - n_motion = motion[int(f_tag*20) : int(to_tag*20)] - if (len(n_motion)) < min_motion_len or (len(n_motion) >= 200): - continue - new_name = random.choice('ABCDEFGHIJKLMNOPQRSTUVW') + '_' + name - while new_name in data_dict: - new_name = random.choice('ABCDEFGHIJKLMNOPQRSTUVW') + '_' + name - data_dict[new_name] = {'motion': n_motion, - 'length': len(n_motion), - 'text':[text_dict]} - new_name_list.append(new_name) - length_list.append(len(n_motion)) - - if flag: - data_dict[name] = {'motion': motion, - 'length': len(motion), - 'text':text_data} - new_name_list.append(name) - length_list.append(len(motion)) - except: - # Some motion may not exist in KIT dataset - pass - - - name_list, length_list = 
zip(*sorted(zip(new_name_list, length_list), key=lambda x: x[1])) - - if opt.is_train: - # root_rot_velocity (B, seq_len, 1) - std[0:1] = std[0:1] / opt.feat_bias - # root_linear_velocity (B, seq_len, 2) - std[1:3] = std[1:3] / opt.feat_bias - # root_y (B, seq_len, 1) - std[3:4] = std[3:4] / opt.feat_bias - # ric_data (B, seq_len, (joint_num - 1)*3) - std[4: 4 + (joints_num - 1) * 3] = std[4: 4 + (joints_num - 1) * 3] / 1.0 - # rot_data (B, seq_len, (joint_num - 1)*6) - std[4 + (joints_num - 1) * 3: 4 + (joints_num - 1) * 9] = std[4 + (joints_num - 1) * 3: 4 + ( - joints_num - 1) * 9] / 1.0 - # local_velocity (B, seq_len, joint_num*3) - std[4 + (joints_num - 1) * 9: 4 + (joints_num - 1) * 9 + joints_num * 3] = std[ - 4 + (joints_num - 1) * 9: 4 + ( - joints_num - 1) * 9 + joints_num * 3] / 1.0 - # foot contact (B, seq_len, 4) - std[4 + (joints_num - 1) * 9 + joints_num * 3:] = std[ - 4 + (joints_num - 1) * 9 + joints_num * 3:] / opt.feat_bias - - assert 4 + (joints_num - 1) * 9 + joints_num * 3 + 4 == mean.shape[-1] - np.save(pjoin(opt.meta_dir, 'mean.npy'), mean) - np.save(pjoin(opt.meta_dir, 'std.npy'), std) - - self.mean = mean - self.std = std - self.length_arr = np.array(length_list) - self.data_dict = data_dict - self.name_list = name_list - - def inv_transform(self, data): - return data * self.std + self.mean - - def real_len(self): - return len(self.data_dict) - - def __len__(self): - return self.real_len() * self.times - - def __getitem__(self, item): - idx = item % self.real_len() - data = self.data_dict[self.name_list[idx]] - motion, m_length, text_list = data['motion'], data['length'], data['text'] - # Randomly select a caption - text_data = random.choice(text_list) - caption = text_data['caption'] - - max_motion_length = self.opt.max_motion_length - if m_length >= self.opt.max_motion_length: - idx = random.randint(0, len(motion) - max_motion_length) - motion = motion[idx: idx + max_motion_length] - else: - padding_len = max_motion_length - m_length - D = motion.shape[1] - padding_zeros = np.zeros((padding_len, D)) - motion = np.concatenate((motion, padding_zeros), axis=0) - - assert len(motion) == max_motion_length - "Z Normalization" - motion = (motion - self.mean) / self.std - - if self.eval_mode: - tokens = text_data['tokens'] - if len(tokens) < self.opt.max_text_len: - # pad with "unk" - tokens = ['sos/OTHER'] + tokens + ['eos/OTHER'] - sent_len = len(tokens) - tokens = tokens + ['unk/OTHER'] * (self.opt.max_text_len + 2 - sent_len) - else: - # crop - tokens = tokens[:self.opt.max_text_len] - tokens = ['sos/OTHER'] + tokens + ['eos/OTHER'] - sent_len = len(tokens) - pos_one_hots = [] - word_embeddings = [] - for token in tokens: - word_emb, pos_oh = self.w_vectorizer[token] - pos_one_hots.append(pos_oh[None, :]) - word_embeddings.append(word_emb[None, :]) - pos_one_hots = np.concatenate(pos_one_hots, axis=0) - word_embeddings = np.concatenate(word_embeddings, axis=0) - return word_embeddings, pos_one_hots, caption, sent_len, motion, m_length - return caption, motion, m_length diff --git a/spaces/mithril-security/blind_chat/src/lib/types/SharedConversation.ts b/spaces/mithril-security/blind_chat/src/lib/types/SharedConversation.ts deleted file mode 100644 index e8981ed83a8871ef49fa539a14cb1ebfca599ea0..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/types/SharedConversation.ts +++ /dev/null @@ -1,12 +0,0 @@ -import type { Message } from "./Message"; -import type { Timestamps } from "./Timestamps"; - -export interface 
SharedConversation extends Timestamps { - _id: string; - - hash: string; - - model: string; - title: string; - messages: Message[]; -} diff --git a/spaces/mms-meta/MMS/uroman/lib/NLP/utilities.pm b/spaces/mms-meta/MMS/uroman/lib/NLP/utilities.pm deleted file mode 100644 index 7be117449190533d826bd63b9266c1434d00408f..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/lib/NLP/utilities.pm +++ /dev/null @@ -1,3652 +0,0 @@ -################################################################ -# # -# utilities # -# # -################################################################ - -package NLP::utilities; - -use File::Spec; -use Time::HiRes qw(time); -use Time::Local; -use NLP::English; -use NLP::UTF8; - -$utf8 = NLP::UTF8; -$englishPM = NLP::English; - -%empty_ht = (); - -use constant DEBUGGING => 0; - -sub member { - local($this,$elem,@array) = @_; - - my $a; - if (defined($elem)) { - foreach $a (@array) { - if (defined($a)) { - return 1 if $elem eq $a; - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::member::a\n"; - } - } - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::member::elem\n"; - } - return 0; -} - -sub dual_member { - local($this,$elem1,$elem2,*array1,*array2) = @_; - # returns 1 if there exists a position $n - # such that $elem1 occurs at position $n in @array1 - # and $elem2 occurs at same position $n in @array2 - - return 0 unless defined($elem1) && defined($elem2); - my $last_index = ($#array1 < $#array2) ? $#array1 : $#array2; #min - my $a; - my $b; - foreach $i ((0 .. $last_index)) { - return 1 if defined($a = $array1[$i]) && defined($b = $array2[$i]) && ($a eq $elem1) && ($b eq $elem2); - } - return 0; -} - -sub sorted_list_equal { - local($this,*list1,*list2) = @_; - - return 0 unless $#list1 == $#list2; - foreach $i ((0 .. $#list1)) { - return 0 unless $list1[$i] eq $list2[$i]; - } - return 1; -} - -sub trim { - local($this, $s) = @_; - - $s =~ s/^\s*//; - $s =~ s/\s*$//; - $s =~ s/\s+/ /g; - return $s; -} - -sub trim2 { - local($this, $s) = @_; - - $s =~ s/^\s*//; - $s =~ s/\s*$//; - return $s; -} - -sub trim_left { - local($this, $s) = @_; - $s =~ s/^\s*//; - return $s; -} - -sub cap_member { - local($this,$elem,@array) = @_; - - my $a; - my $lc_elem = lc $elem; - foreach $a (@array) { - return $a if $lc_elem eq lc $a; - } - return ""; -} - -sub remove_elem { - local($this,$elem,@array) = @_; - - return @array unless $this->member($elem, @array); - @rm_list = (); - foreach $a (@array) { - push(@rm_list, $a) unless $elem eq $a; - } - return @rm_list; -} - -sub intersect_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - if (defined($elem1)) { - foreach $elem2 (@list2) { - if (defined($elem2)) { - return 1 if $elem1 eq $elem2; - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem2\n"; - } - } - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem1\n"; - } - } - return 0; -} - -sub intersect_expl_p { - local($this,*list1,@list2) = @_; - - foreach $elem1 (@list1) { - foreach $elem2 (@list2) { - return 1 if $elem1 eq $elem2; - } - } - return 0; -} - -sub intersection { - local($this,*list1,*list2) = @_; - - @intersection_list = (); - foreach $elem1 (@list1) { - foreach $elem2 (@list2) { - push(@intersection_list, $elem1) if ($elem1 eq $elem2) && ! 
$this->member($elem1, @intersection_list); - } - } - return @intersection_list; -} - -sub cap_intersect_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - $lc_elem1 = lc $elem1; - foreach $elem2 (@list2) { - return 1 if $lc_elem1 eq lc $elem2; - } - } - return 0; -} - -sub subset_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - return 0 unless $this->member($elem1, @list2); - } - return 1; -} - -sub cap_subset_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - return 0 unless $this->cap_member($elem1, @list2); - } - return 1; -} - -sub unique { - local($this, @list) = @_; - - my %seen = (); - @uniq = (); - foreach $item (@list) { - push(@uniq, $item) unless $seen{$item}++; - } - return @uniq; -} - -sub position { - local($this,$elem,@array) = @_; - $i = 0; - foreach $a (@array) { - return $i if $elem eq $a; - $i++; - } - return -1; -} - -sub positions { - local($this,$elem,@array) = @_; - $i = 0; - @positions_in_list = (); - foreach $a (@array) { - push(@positions_in_list, $i) if $elem eq $a; - $i++; - } - return @positions_in_list; -} - -sub last_position { - local($this,$elem,@array) = @_; - - $result = -1; - $i = 0; - foreach $a (@array) { - $result = $i if $elem eq $a; - $i++; - } - return $result; -} - -sub rand_n_digit_number { - local($this,$n) = @_; - - return 0 unless $n =~ /^[1-9]\d*$/; - $ten_power_n = 10 ** ($n - 1); - return int(rand(9 * $ten_power_n)) + $ten_power_n; -} - -# Consider File::Temp -sub new_tmp_filename { - local($this,$filename) = @_; - - $loop_limit = 1000; - ($dir,$simple_filename) = ($filename =~ /^(.+)\/([^\/]+)$/); - $simple_filename = $filename unless defined($simple_filename); - $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . "-$simple_filename"; - while ((-e $new_filename) && ($loop_limit-- >= 0)) { - $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . 
"-$simple_filename"; - } - return $new_filename; -} - -# support sorting order: "8", "8.0", "8.5", "8.5.1.", "8.10", "10", "10-12" - -sub compare_complex_numeric { - local($this,$a,$b) = @_; - - (my $a_num,my $a_rest) = ($a =~ /^(\d+)\D*(.*)$/); - (my $b_num,my $b_rest) = ($b =~ /^(\d+)\D*(.*)$/); - - if (defined($a_rest) && defined($b_rest)) { - return ($a_num <=> $b_num) - || $this->compare_complex_numeric($a_rest,$b_rest); - } else { - return $a cmp $b; - } -} - -# support sorting order: "lesson8-ps-v1.9.xml", "Lesson 10_ps-v_1.11.xml" -# approach: segment strings into alphabetic and numerical sections and compare pairwise - -sub compare_mixed_alpha_numeric { - local($this,$a,$b) = @_; - - ($a_alpha,$a_num,$a_rest) = ($a =~ /^(\D*)(\d[-\d\.]*)(.*)$/); - ($b_alpha,$b_num,$b_rest) = ($b =~ /^(\D*)(\d[-\d\.]*)(.*)$/); - - ($a_alpha) = ($a =~ /^(\D*)/) unless defined $a_alpha; - ($b_alpha) = ($b =~ /^(\D*)/) unless defined $b_alpha; - - # ignore non-alphabetic characters in alpha sections - $a_alpha =~ s/\W|_//g; - $b_alpha =~ s/\W|_//g; - - if ($alpha_cmp = lc $a_alpha cmp lc $b_alpha) { - return $alpha_cmp; - } elsif (defined($a_rest) && defined($b_rest)) { - return $this->compare_complex_numeric($a_num,$b_num) - || $this->compare_mixed_alpha_numeric ($a_rest,$b_rest); - } else { - return (defined($a_num) <=> defined($b_num)) || ($a cmp $b); - } -} - -# @sorted_lessons = sort { NLP::utilities->compare_mixed_alpha_numeric($a,$b) } @lessons; - -sub html_guarded_p { - local($this,$string) = @_; - - return 0 if $string =~ /[<>"]/; - $string .= " "; - @segs = split('&',$string); - shift @segs; - foreach $seg (@segs) { - next if $seg =~ /^[a-z]{2,6};/i; - # next if $seg =~ /^amp;/; - # next if $seg =~ /^quot;/; - # next if $seg =~ /^nbsp;/; - # next if $seg =~ /^gt;/; - # next if $seg =~ /^lt;/; - next if $seg =~ /^#(\d+);/; - next if $seg =~ /^#x([0-9a-fA-F]+);/; - return 0; - } - return 1; -} - -sub guard_tooltip_text { - local($this,$string) = @_; - - $string =~ s/\xCB\x88/'/g; - return $string; -} - -sub guard_html { - local($this,$string,$control_string) = @_; - - return "" unless defined($string); - my $guarded_string; - $control_string = "" unless defined($control_string); - return $string if ($string =~ /&/) - && (! ($control_string =~ /\bstrict\b/)) - && $this->html_guarded_p($string); - $guarded_string = $string; - $guarded_string =~ s/&/&/g; - if ($control_string =~ /slash quote/) { - $guarded_string =~ s/"/\\"/g; - } elsif ($control_string =~ /keep quote/) { - } else { - $guarded_string =~ s/\"/"/g; - } - if ($control_string =~ /escape-slash/) { - $guarded_string =~ s/\//&x2F;/g; - } - $guarded_string =~ s/>/>/g; - $guarded_string =~ s/" : - /^lt$/i ? "<" : - /^x2F$/i ? "/" : - /^nbsp$/i ? "\xC2\xA0" : - /^#(\d+)$/ ? $this->chr($1) : - /^#x([0-9a-f]+)$/i ? $this->chr(hex($1)) : - $_ - }gex; - return $string; -} - -sub unguard_html_r { - local($this,$string) = @_; - - return undef unless defined($string); - - $string =~ s/&/&/g; - $string =~ s/"/'/g; - $string =~ s/<//g; - - ($d) = ($string =~ /&#(\d+);/); - while (defined($d)) { - $c = $this->chr($d); - $string =~ s/&#$d;/$c/g; - ($d) = ($string =~ /&#(\d+);/); - } - ($x) = ($string =~ /&#x([0-9a-f]+);/i); - while (defined($x)) { - $c = $this->chr(hex($x)); - $string =~ s/&#x$x;/$c/g; - ($x) = ($string =~ /&#x([0-9a-f]+);/i); - } - $string0 = $string; - ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i); - while (defined($x)) { - $c = $this->chr("%" . 
hex($x)); - $string =~ s/\%$x/$c/g; - ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i); - } - return $string; -} - -sub unguard_html_l { - local($caller,$string) = @_; - - return undef unless defined($string); - - my $pre; - my $core; - my $post; - my $repl; - my $s = $string; - if (($pre,$core,$post) = ($s =~ /^(.*)&(amp|quot|lt|gt|#\d+|#x[0-9a-f]+);(.*)$/i)) { - $repl = "?"; - $repl = "&" if $core =~ /^amp$/i; - $repl = "'" if $core =~ /^quot$/i; - $repl = "<" if $core =~ /^lt$/i; - $repl = ">" if $core =~ /^gt$/i; - if ($core =~ /^#\d+$/i) { - $core2 = substr($core,1); - $repl = $caller->chr($core2); - } - $repl = $caller->chr(hex(substr($core,2))) if $core =~ /^#x[0-9a-f]+$/i; - $s = $pre . $repl . $post; - } - return $s; -} - -sub guard_html_quote { - local($caller,$string) = @_; - - $string =~ s/"/"/g; - return $string; -} - -sub unguard_html_quote { - local($caller,$string) = @_; - - $string =~ s/"/"/g; - return $string; -} - -sub uri_encode { - local($caller,$string) = @_; - - $string =~ s/([^^A-Za-z0-9\-_.!~*()'])/ sprintf "%%%02x", ord $1 /eg; - return $string; -} - -sub uri_decode { - local($caller,$string) = @_; - - $string =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg; - return $string; -} - -sub remove_xml_tags { - local($caller,$string) = @_; - - $string =~ s/<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>//g; - return $string; -} - -sub remove_any_tokenization_at_signs_around_xml_tags { - local($caller,$string) = @_; - - $string =~ s/(?:\@ \@)?(<[^<>]+>)(?:\@ \@)?/$1/g; - $string =~ s/\@?(<[^<>]+>)\@?/$1/g; - return $string; -} - -sub remove_xml_tags_and_any_bordering_at_signs { - # at-signs from tokenization - local($caller,$string) = @_; - - $string =~ s/\@?<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>\@?//g; - return $string; -} - -sub chr { - local($caller,$i) = @_; - - return undef unless $i =~ /^\%?\d+$/; - if ($i =~ /^%/) { - $i =~ s/^\%//; - return chr($i) if $i < 128; - return "\x80" | chr($i - 128) if $i < 256; - } else { - return chr($i) if $i < 128; - return ("\xC0" | chr(($i / 64) % 32)) - . ("\x80" | chr($i % 64)) if $i < 2048; - return ("\xE0" | chr(int($i / 4096) % 16)) - . ("\x80" | chr(int($i / 64) % 64)) - . ("\x80" | chr($i % 64)) if $i < 65536; - return ("\xF0" | chr(int($i / 262144) % 8)) - . ("\x80" | chr(int($i / 4096) % 64)) - . ("\x80" | chr(int($i / 64) % 64)) - . ("\x80" | chr($i % 64)) if $i < 2097152; - } - return "?"; -} - -sub guard_cgi { - local($caller, $string) = @_; - - $guarded_string = $string; - if ($string =~ /[\x80-\xFF]/) { - $guarded_string = ""; - while ($string ne "") { - $char = substr($string, 0, 1); - $string = substr($string, 1); - if ($char =~ /^[\\ ;\#\&\:\=\"\'\+\?\x00-\x1F\x80-\xFF]$/) { - $hex = sprintf("%2.2x",ord($char)); - $guarded_string .= uc "%$hex"; - } else { - $guarded_string .= $char; - } - } - } else { - $guarded_string = $string; - $guarded_string =~ s/%/%25/g; - $guarded_string =~ s/\n/%5Cn/g; - $guarded_string =~ s/\t/%5Ct/g; - $guarded_string =~ s/ /%20/g; - $guarded_string =~ s/"/%22/g; - $guarded_string =~ s/#/%23/g; - $guarded_string =~ s/&/%26/g; - $guarded_string =~ s/'/%27/g; - $guarded_string =~ s/\+/%2B/g; - $guarded_string =~ s/\//%2F/g; - $guarded_string =~ s/:/%3A/g; - $guarded_string =~ s/;/%3B/g; - $guarded_string =~ s//%3E/g; - $guarded_string =~ s/\?/%3F/g; - } - return $guarded_string; -} - -sub repair_cgi_guard { - local($caller,$string) = @_; - # undo second cgi-guard, e.g. 
"Jo%25C3%25ABlle_Aubron" -> "Jo%C3%ABlle_Aubron" - - $string =~ s/(%)25([CD][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3/g; - $string =~ s/(%)25(E[0-9A-F]%)25([89AB][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3$4/g; - return $string; -} - -sub unguard_cgi { - local($caller,$string) = @_; - - $unguarded_string = $string; - $unguarded_string =~ s/%5Cn/\n/g; - $unguarded_string =~ s/%5Ct/\t/g; - $unguarded_string =~ s/%20/ /g; - $unguarded_string =~ s/%23/#/g; - $unguarded_string =~ s/%26/&/g; - $unguarded_string =~ s/%2B/+/g; - $unguarded_string =~ s/%2C/,/g; - $unguarded_string =~ s/%3A/:/g; - $unguarded_string =~ s/%3D/=/g; - $unguarded_string =~ s/%3F/?/g; - $unguarded_string =~ s/%C3%A9/\xC3\xA9/g; - - # more general - ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/); - while (defined($code)) { - $percent_code = "%" . $code; - $hex_code = sprintf("%c", hex($code)); - $unguarded_string =~ s/$percent_code/$hex_code/g; - ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/); - } - - return $unguarded_string; -} - -sub regex_guard { - local($caller,$string) = @_; - - $guarded_string = $string; - $guarded_string =~ s/([\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]])/\\$1/g - if $guarded_string =~ /[\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]]/; - - return $guarded_string; -} - -sub g_regex_spec_tok_p { - local($this,$string) = @_; - - # specials: ( ) (?: ) [ ] - return ($string =~ /^(\(\?:|[()\[\]])$/); -} - -sub regex_guard_norm { - local($this,$string) = @_; - - return $string unless $string =~ /[\[\]\\()$@?+]/; - my $rest = $string; - my @stack = (""); - while ($rest ne "") { - # specials: ( ) (?: ) [ ] ? + - if (($pre, $special, $post) = ($rest =~ /^((?:\\.|[^\[\]()?+])*)(\(\?:|[\[\]()?+])(.*)$/)) { - # print STDERR "Special: $pre *$special* $post\n"; - unless ($pre eq "") { - push(@stack, $pre); - while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1])) - && (! $this->g_regex_spec_tok_p($stack[$#stack]))) { - $s1 = pop @stack; - $s2 = pop @stack; - push(@stack, "$s2$s1"); - } - } - if ($special =~ /^[?+]$/) { - push(@stack, "\\") if ($stack[$#stack] eq "") - || ($this->g_regex_spec_tok_p($stack[$#stack]) && ($stack[$#stack] ne "[")); - push(@stack, $special); - } elsif ($special eq "]") { - if (($#stack >= 1) && ($stack[$#stack-1] eq "[") && ! $this->g_regex_spec_tok_p($stack[$#stack])) { - $char_expression = pop @stack; - pop @stack; - push(@stack, "[$char_expression]"); - } else { - push(@stack, $special); - } - } elsif (($special =~ /^[()]/) && (($stack[$#stack] eq "[") - || (($#stack >= 1) - && ($stack[$#stack-1] eq "[") - && ! $this->g_regex_spec_tok_p($stack[$#stack])))) { - push(@stack, "\\$special"); - } elsif ($special eq ")") { - if (($#stack >= 1) && ($stack[$#stack-1] =~ /^\((\?:)?$/) && ! $this->g_regex_spec_tok_p($stack[$#stack])) { - $alt_expression = pop @stack; - $open_para = pop @stack; - if ($open_para eq "(") { - push(@stack, "(?:$alt_expression)"); - } else { - push(@stack, "$open_para$alt_expression)"); - } - } else { - push(@stack, $special); - } - } else { - push(@stack, $special); - } - while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1])) - && (! $this->g_regex_spec_tok_p($stack[$#stack]))) { - $s1 = pop @stack; - $s2 = pop @stack; - push(@stack, "$s2$s1"); - } - $rest = $post; - } else { - push(@stack, $rest); - $rest = ""; - } - } - # print STDERR "Stack: " . join(";", @stack) . "\n"; - foreach $i ((0 .. $#stack)) { - $stack_elem = $stack[$i]; - if ($stack_elem =~ /^[()\[\]]$/) { - $stack[$i] = "\\" . 
$stack[$i]; - } - } - return join("", @stack); -} - -sub string_guard { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/([\\"])/\\$1/g - if $guarded_string =~ /[\\"]/; - - return $guarded_string; -} - -sub json_string_guard { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/([\\"])/\\$1/g - if $guarded_string =~ /[\\"]/; - $guarded_string =~ s/\r*\n/\\n/g - if $guarded_string =~ /\n/; - - return $guarded_string; -} - -sub json_string_unguard { - local($caller,$string) = @_; - - return "" unless defined($string); - $string =~ s/\\n/\n/g - if $string =~ /\\n/; - return $string; -} - -sub guard_javascript_arg { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/\\/\\\\/g; - $guarded_string =~ s/'/\\'/g; - return $guarded_string; -} - -sub guard_substitution_right_hand_side { - # "$1x" => "$1 . \"x\"" - local($caller,$string) = @_; - - my $result = ""; - ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/); - while (defined($var)) { - $result .= " . " if $result; - $result .= "\"$pre\" . " unless $pre eq ""; - $result .= $var; - $string = $post; - ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/); - } - $result .= " . \"$string\"" if $string; - return $result; -} - -sub string_starts_with_substring { - local($caller,$string,$substring) = @_; - - $guarded_substring = $caller->regex_guard($substring); - return $string =~ /^$guarded_substring/; -} - -sub one_string_starts_with_the_other { - local($caller,$s1,$s2) = @_; - - return ($s1 eq $s2) - || $caller->string_starts_with_substring($s1,$s2) - || $caller->string_starts_with_substring($s2,$s1); -} - -sub string_ends_in_substring { - local($caller,$string,$substring) = @_; - - $guarded_substring = $caller->regex_guard($substring); - return $string =~ /$guarded_substring$/; -} - -sub string_equal_ignore_leading_multiple_or_trailing_blanks { - local($caller,$string1,$string2) = @_; - - return 1 if $string1 eq $string2; - $string1 =~ s/\s+/ /; - $string2 =~ s/\s+/ /; - $string1 =~ s/^\s+//; - $string2 =~ s/^\s+//; - $string1 =~ s/\s+$//; - $string2 =~ s/\s+$//; - - return $string1 eq $string2; -} - -sub strip_substring_from_start_of_string { - local($caller,$string,$substring,$error_code) = @_; - - $error_code = "ERROR" unless defined($error_code); - my $reg_surf = $caller->regex_guard($substring); - if ($string =~ /^$guarded_substring/) { - $string =~ s/^$reg_surf//; - return $string; - } else { - return $error_code; - } -} - -sub strip_substring_from_end_of_string { - local($caller,$string,$substring,$error_code) = @_; - - $error_code = "ERROR" unless defined($error_code); - my $reg_surf = $caller->regex_guard($substring); - if ($string =~ /$reg_surf$/) { - $string =~ s/$reg_surf$//; - return $string; - } else { - return $error_code; - } -} - -# to be deprecated -sub lang_code { - local($caller,$language) = @_; - - $langPM = NLP::Language->new(); - return $langPM->lang_code($language); -} - -sub full_language { - local($caller,$lang_code) = @_; - - return "Arabic" if $lang_code eq "ar"; - return "Chinese" if $lang_code eq "zh"; - return "Czech" if $lang_code eq "cs"; - return "Danish" if $lang_code eq "da"; - return "Dutch" if $lang_code eq "nl"; - return "English" if $lang_code eq "en"; - return "Finnish" if $lang_code eq "fi"; - return "French" if $lang_code eq "fr"; - return "German" if $lang_code eq "de"; - return 
"Greek" if $lang_code eq "el"; - return "Hebrew" if $lang_code eq "he"; - return "Hindi" if $lang_code eq "hi"; - return "Hungarian" if $lang_code eq "hu"; - return "Icelandic" if $lang_code eq "is"; - return "Indonesian" if $lang_code eq "id"; - return "Italian" if $lang_code eq "it"; - return "Japanese" if $lang_code eq "ja"; - return "Kinyarwanda" if $lang_code eq "rw"; - return "Korean" if $lang_code eq "ko"; - return "Latin" if $lang_code eq "la"; - return "Malagasy" if $lang_code eq "mg"; - return "Norwegian" if $lang_code eq "no"; - return "Pashto" if $lang_code eq "ps"; - return "Persian" if $lang_code eq "fa"; - return "Polish" if $lang_code eq "pl"; - return "Portuguese" if $lang_code eq "pt"; - return "Romanian" if $lang_code eq "ro"; - return "Russian" if $lang_code eq "ru"; - return "Spanish" if $lang_code eq "es"; - return "Swedish" if $lang_code eq "sv"; - return "Turkish" if $lang_code eq "tr"; - return "Urdu" if $lang_code eq "ur"; - return ""; -} - -# to be deprecated -sub short_lang_name { - local($caller,$lang_code) = @_; - - $langPM = NLP::Language->new(); - return $langPM->shortname($lang_code); -} - -sub ml_dir { - local($caller,$language,$type) = @_; - - $type = "MSB" unless defined($type); - $lang_code = $langPM->lang_code($language); - return $caller->ml_dir($lang_code, "lex") . "/corpora" if $type eq "corpora"; - return "" unless defined($rc); - $ml_home = $rc->ml_home_dir(); - return File::Spec->catfile($ml_home, "arabic") - if ($lang_code eq "ar-iq") && ! $caller->member(lc $type,"lex","onto","dict"); - $langPM = NLP::Language->new(); - $lexdir = $langPM->lexdir($lang_code); - return $lexdir if defined($lexdir); - return ""; -} - -sub language_lex_filename { - local($caller,$language,$type) = @_; - - $langPM = NLP::Language->new(); - if (($lang_code = $langPM->lang_code($language)) - && ($ml_dir = $caller->ml_dir($lang_code,$type)) - && ($norm_language = $caller->short_lang_name($lang_code))) { - return "$ml_dir/$norm_language-lex" if ($type eq "lex"); - return "$ml_dir/onto" if ($type eq "onto"); - return "$ml_dir/$norm_language-english-dict" if ($type eq "dict") && !($lang_code eq "en"); - return ""; - } else { - return ""; - } -} - -# filename_without_path is obsolete - replace with -# use File::Basename; -# basename($filename) -sub filename_without_path { - local($caller,$filename) = @_; - - $filename =~ s/^.*\/([^\/]+)$/$1/; - return $filename; -} - -sub option_string { - local($caller,$input_name,$default,*values,*labels) = @_; - - my $s = ""; - return $s; -} - -sub pes_subseq_surf { - local($this,$start,$length,$langCode,@pes) = @_; - - my $surf = ""; - if ($start+$length-1 <= $#pes) { - foreach $i ($start .. $start + $length - 1) { - my $pe = $pes[$i]; - $surf .= $pe->get("surf",""); - $surf .= " " if $langCode =~ /^(ar|en|fr)$/; - } - } - $surf =~ s/\s+$//; - return $surf; -} - -sub copyList { - local($this,@list) = @_; - - @copy_list = (); - foreach $elem (@list) { - push(@copy_list,$elem); - } - return @copy_list; -} - -sub list_with_same_elem { - local($this,$size,$elem) = @_; - - @list = (); - foreach $i (0 .. 
$size-1) { - push(@list,$elem); - } - return @list; -} - -sub count_occurrences { - local($this,$s,$substring) = @_; - - $occ = 0; - $new = $s; - $guarded_substring = $this->regex_guard($substring); - $new =~ s/$guarded_substring//; - while ($new ne $s) { - $occ++; - $s = $new; - $new =~ s/$guarded_substring//; - } - return $occ; -} - -sub position_of_nth_occurrence { - local($this,$s,$substring,$occ) = @_; - - return -1 unless $occ > 0; - my $pos = 0; - while (($pos = index($s, $substring, $pos)) >= 0) { - return $pos if $occ == 1; - $occ--; - $pos = $pos + length($substring); - } - return -1; -} - -sub has_diff_elements_p { - local($this,@array) = @_; - - return 0 if $#array < 1; - $elem = $array[0]; - - foreach $a (@array) { - return 1 if $elem ne $a; - } - return 0; -} - -sub init_log { - local($this,$logfile, $control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - system("rm -f $logfile"); - system("date > $logfile; chmod 777 $logfile"); - } -} - -sub time_stamp_log { - local($this,$logfile, $control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - system("date >> $logfile; chmod 777 $logfile"); - } -} - -sub log { - local($this,$message,$logfile,$control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - $this->init_log($logfile, $control) unless -w $logfile; - if ($control =~ /timestamp/i) { - $this->time_stamp_log($logfile, $control); - } - $guarded_message = $message; - $guarded_message =~ s/"/\\"/g; - system("echo \"$guarded_message\" >> $logfile"); - } -} - -sub month_name_to_month_number { - local($this,$month_name) = @_; - - $month_name_init = lc substr($month_name,0,3); - return $this->position($month_name_init, "jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec") + 1; -} - -my @short_month_names = ("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec."); -my @full_month_names = ("January","February","March","April","May","June","July","August","September","October","November","December"); - -sub month_number_to_month_name { - local($this,$month_number, $control) = @_; - - $month_number =~ s/^0//; - if ($month_number =~ /^([1-9]|1[0-2])$/) { - return ($control && ($control =~ /short/i)) - ? $short_month_names[$month_number-1] - : $full_month_names[$month_number-1]; - } else { - return ""; - } -} - -sub leap_year { - local($this,$year) = @_; - - return 0 if $year % 4 != 0; - return 1 if $year % 400 == 0; - return 0 if $year % 100 == 0; - return 1; -} - -sub datetime { - local($this,$format,$time_in_secs, $command) = @_; - - $command = "" unless defined($command); - $time_in_secs = time unless defined($time_in_secs) && $time_in_secs; - @time_vector = ($command =~ /\b(gm|utc)\b/i) ? 
gmtime($time_in_secs) : localtime($time_in_secs); - ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst)=@time_vector; - $thisyear = $year + 1900; - $thismon=(Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec)[$mon]; - $thismon2=("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec.")[$mon]; - $thismonth = $mon + 1; - $thisday=(Sun,Mon,Tue,Wed,Thu,Fri,Sat)[$wday]; - $milliseconds = int(($time_in_secs - int($time_in_secs)) * 1000); - $date="$thisday $thismon $mday, $thisyear"; - $sdate="$thismon $mday, $thisyear"; - $dashedDate = sprintf("%04d-%02d-%02d",$thisyear,$thismonth,$mday); - $slashedDate = sprintf("%02d/%02d/%04d",$mday,$thismonth,$thisyear); - $time=sprintf("%02d:%02d:%02d",$hour,$min,$sec); - $shorttime=sprintf("%d:%02d",$hour,$min); - $shortdatetime = "$thismon2 $mday, $shorttime"; - - if ($date =~ /undefined/) { - return ""; - } elsif ($format eq "date at time") { - return "$date at $time"; - } elsif ($format eq "date") { - return "$date"; - } elsif ($format eq "sdate") { - return "$sdate"; - } elsif ($format eq "ddate") { - return "$dashedDate"; - } elsif ($format eq "time") { - return "$time"; - } elsif ($format eq "dateTtime+ms") { - return $dashedDate . "T" . $time . "." . $milliseconds; - } elsif ($format eq "dateTtime") { - return $dashedDate . "T" . $time; - } elsif ($format eq "yyyymmdd") { - return sprintf("%04d%02d%02d",$thisyear,$thismonth,$mday); - } elsif ($format eq "short date at time") { - return $shortdatetime; - } else { - return "$date at $time"; - } -} - -sub datetime_of_last_file_modification { - local($this,$format,$filename) = @_; - - return $this->datetime($format,(stat($filename))[9]); -} - -sub add_1sec { - local($this,$datetime) = @_; - - if (($year,$month,$day,$hour,$minute,$second) = ($datetime =~ /^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)$/)) { - $second++; - if ($second >= 60) { $second -= 60; $minute++; } - if ($minute >= 60) { $minute -= 60; $hour++; } - if ($hour >= 24) { $hour -= 24; $day++; } - if ($month =~ /^(01|03|05|07|08|10|12)$/) { - if ($day > 31) { $day -= 31; $month++; } - } elsif ($month =~ /^(04|06|09|11)$/) { - if ($day > 30) { $day -= 30; $month++; } - } elsif (($month eq "02") && $this->leap_year($year)) { - if ($day > 29) { $day -= 29; $month++; } - } elsif ($month eq "02") { - if ($day > 28) { $day -= 28; $month++; } - } - if ($month > 12) { $month -= 12; $year++; } - return sprintf("%04d-%02d-%02dT%02d:%02d:%02d", $year,$month,$day,$hour,$minute,$second); - } else { - return ""; - } -} - -sub stopwatch { - local($this, $function, $id, *ht, *OUT) = @_; - # function: start|stop|count|report; start|stop times are absolute (in secs.) 
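- # Usage sketch (hypothetical caller; the id "tokenize" and the *STDERR handle are illustrative,
- # not taken from this file). Note that an interval is closed with "end", not "stop":
- #   NLP::utilities->stopwatch("start", "tokenize", *ht, *STDERR);
- #   ... timed work ...
- #   NLP::utilities->stopwatch("end", "tokenize", *ht, *STDERR);
- #   NLP::utilities->stopwatch("count", "tokenize", *ht, *STDERR);  # bump a simple counter
- #   NLP::utilities->stopwatch("report", "", *ht, *STDERR);         # print accumulated times and counts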
- - my $current_time = time; - # print OUT "Point S stopwatch $function $id $current_time\n"; - if ($function eq "start") { - if ($ht{STOPWATCH_START}->{$id}) { - $ht{STOPWATCH_N_RESTARTS}->{$id} = ($ht{STOPWATCH_N_RESTARTS}->{$id} || 0) + 1; - } else { - $ht{STOPWATCH_START}->{$id} = $current_time; - } - } elsif ($function eq "end") { - if ($start_time = $ht{STOPWATCH_START}->{$id}) { - $ht{STOPWATCH_TIME}->{$id} = ($ht{STOPWATCH_TIME}->{$id} || 0) + ($current_time - $start_time); - $ht{STOPWATCH_START}->{$id} = ""; - } else { - $ht{STOPWATCH_N_DEAD_ENDS}->{$id} = ($ht{STOPWATCH_N_DEAD_ENDS}->{$id} || 0) + 1; - } - } elsif ($function eq "count") { - $ht{STOPWATCH_COUNT}->{$id} = ($ht{STOPWATCH_COUNT}->{$id} || 0) + 1; - } elsif ($function eq "report") { - my $id2; - foreach $id2 (keys %{$ht{STOPWATCH_START}}) { - if ($start_time = $ht{STOPWATCH_START}->{$id2}) { - $ht{STOPWATCH_TIME}->{$id2} = ($ht{STOPWATCH_TIME}->{$id2} || 0) + ($current_time - $start_time); - $ht{STOPWATCH_START}->{$id2} = $current_time; - } - } - print OUT "Time report:\n"; - foreach $id2 (sort { $ht{STOPWATCH_TIME}->{$b} <=> $ht{STOPWATCH_TIME}->{$a} } - keys %{$ht{STOPWATCH_TIME}}) { - my $stopwatch_time = $ht{STOPWATCH_TIME}->{$id2}; - $stopwatch_time = $this->round_to_n_decimal_places($stopwatch_time, 3); - my $n_restarts = $ht{STOPWATCH_N_RESTARTS}->{$id2}; - my $n_dead_ends = $ht{STOPWATCH_N_DEAD_ENDS}->{$id2}; - my $start_time = $ht{STOPWATCH_START}->{$id2}; - print OUT " $id2: $stopwatch_time seconds"; - print OUT " with $n_restarts restart(s)" if $n_restarts; - print OUT " with $n_dead_ends dead end(s)" if $n_dead_ends; - print OUT " (active)" if $start_time; - print OUT "\n"; - } - foreach $id2 (sort { $ht{STOPWATCH_COUNT}->{$b} <=> $ht{STOPWATCH_COUNT}->{$a} } - keys %{$ht{STOPWATCH_COUNT}}) { - $count = $ht{STOPWATCH_COUNT}->{$id2}; - print OUT " C $id2: $count\n"; - } - } -} - -sub print_html_banner { - local($this,$text,$bgcolor,*OUT,$control) = @_; - - $control = "" unless defined($control); - $bgcolor = "#BBCCFF" unless defined($bgcolor); - print OUT "
        "; - print OUT "  " unless $text =~ /^\s*<(table|nobr)/; - print OUT $text; - print OUT "
        \n"; - print OUT "
        \n" unless $control =~ /nobr/i; -} - -sub print_html_head { - local($this, $title, *OUT, $control, $onload_fc, $add_javascript) = @_; - - $control = "" unless defined($control); - $onload_fc = "" unless defined($onload_fc); - $onload_clause = ($onload_fc) ? " onload=\"$onload_fc\"" : ""; - $add_javascript = "" unless defined($add_javascript); - $max_age_clause = ""; - $max_age_clause = ""; # if $control =~ /\bexp1hour\b/; - $css_clause = ""; - $css_clause = "\n " if $control =~ /css/; - $css_clause .= "\n " if $control =~ /css/; - $css_clause = "\n " if $control =~ /css-handheld/; - $icon_clause = ""; - $icon_clause .= "\n " if $control =~ /\bAMR\b/i; - $icon_clause .= "\n " if $control =~ /\bCRE\b/i; - print OUT "\xEF\xBB\xBF\n" unless $control =~ /\bno-bom\b/; # utf8 marker byte order mark - print OUT< - - - $max_age_clause - $title$css_clause$icon_clause -END_OF_HEADER1 -; - - unless ($control =~ /no javascript/) { - print OUT< - - -END_OF_HEADER2 -; - } - - print OUT< - -END_OF_HEADER3 -; -} - - -sub print_html_foot { - local($this, *OUT) = @_; - - print OUT " \n"; - print OUT "\n"; -} - -sub print_html_page { - local($this, *OUT, $s) = @_; - - print OUT "\xEF\xBB\xBF\n"; - print OUT "\n"; - print OUT " \n"; - print OUT " DEBUG\n"; - print OUT " \n"; - print OUT " \n"; - print OUT " \n"; - print OUT " \n"; - print OUT " $s\n"; - print OUT " \n"; - print OUT "\n"; -} - -sub http_catfile { - local($this, @path) = @_; - - $result = File::Spec->catfile(@path); - $result =~ s/(https?):\/([a-zA-Z])/$1:\/\/$2/; - return $result; -} - -sub underscore_to_space { - local($this, $s) = @_; - - return "" unless defined($s); - - $s =~ s/_+/ /g; - return $s; -} - -sub space_to_underscore { - local($this, $s) = @_; - - return "" unless defined($s); - - $s =~ s/ /_/g; - return $s; -} - -sub remove_spaces { - local($this, $s) = @_; - - $s =~ s/\s//g; - return $s; -} - -sub is_punctuation_string_p { - local($this, $s) = @_; - - return "" unless $s; - $s = $this->normalize_string($s) if $s =~ /[\x80-\xBF]/; - return $s =~ /^[-_,;:.?!\/\@+*"()]+$/; -} - -sub is_rare_punctuation_string_p { - local($this, $s) = @_; - - return 0 unless $s =~ /^[\x21-\x2F\x3A\x40\x5B-\x60\x7B-\x7E]{2,}$/; - return 0 if $s =~ /^(\.{2,3}|-{2,3}|\*{2,3}|::|\@?[-\/:]\@?)$/; - return 1; -} - -sub simplify_punctuation { - local($this, $s) = @_; - - $s =~ s/\xE2\x80\x92/-/g; - $s =~ s/\xE2\x80\x93/-/g; - $s =~ s/\xE2\x80\x94/-/g; - $s =~ s/\xE2\x80\x95/-/g; - $s =~ s/\xE2\x80\x98/`/g; - $s =~ s/\xE2\x80\x99/'/g; - $s =~ s/\xE2\x80\x9A/`/g; - $s =~ s/\xE2\x80\x9C/"/g; - $s =~ s/\xE2\x80\x9D/"/g; - $s =~ s/\xE2\x80\x9E/"/g; - $s =~ s/\xE2\x80\x9F/"/g; - $s =~ s/\xE2\x80\xA2/*/g; - $s =~ s/\xE2\x80\xA4/./g; - $s =~ s/\xE2\x80\xA5/../g; - $s =~ s/\xE2\x80\xA6/.../g; - return $s; -} - -sub latin_plus_p { - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - return $s =~ /^([\x20-\x7E]|\xC2[\xA1-\xBF]|[\xC3-\xCC][\x80-\xBF]|\xCA[\x80-\xAF]|\xE2[\x80-\xAF][\x80-\xBF])+$/; -} - -sub nth_line_in_file { - local($this, $filename, $n) = @_; - - return "" unless $n =~ /^[1-9]\d*$/; - open(IN, $filename) || return ""; - my $line_no = 0; - while () { - $line_no++; - if ($n == $line_no) { - $_ =~ s/\s+$//; - close(IN); - return $_; - } - } - close(IN); - return ""; -} - -sub read_file { - local($this, $filename) = @_; - - my $file_content = ""; - open(IN, $filename) || return ""; - while () { - $file_content .= $_; - } - close(IN); - return $file_content; -} - -sub cap_list { - local($this, @list) = @_; - - 
@cap_list = (); - foreach $l (@list) { - ($premod, $core) = ($l =~ /^(a|an) (\S.*)$/); - if (defined($premod) && defined($core)) { - push(@cap_list, "$premod \u$core"); - } elsif ($this->cap_member($l, "US")) { - push(@cap_list, uc $l); - } else { - push(@cap_list, "\u$l"); - } - } - return @cap_list; -} - -sub integer_list_with_commas_and_ranges { - local($this, @list) = @_; - - my $in_range_p = 0; - my $last_value = 0; - my $result = ""; - while (@list) { - $elem = shift @list; - if ($elem =~ /^\d+$/) { - if ($in_range_p) { - if ($elem == $last_value + 1) { - $last_value = $elem; - } else { - $result .= "-$last_value, $elem"; - if (@list && ($next = $list[0]) && ($elem =~ /^\d+$/) && ($next =~ /^\d+$/) - && ($next == $elem + 1)) { - $last_value = $elem; - $in_range_p = 1; - } else { - $in_range_p = 0; - } - } - } else { - $result .= ", $elem"; - if (@list && ($next = $list[0]) && ($elem =~ /^\d+$/) && ($next =~ /^\d+$/) - && ($next == $elem + 1)) { - $last_value = $elem; - $in_range_p = 1; - } - } - } else { - if ($in_range_p) { - $result .= "-$last_value, $elem"; - $in_range_p = 0; - } else { - $result .= ", $elem"; - } - } - } - if ($in_range_p) { - $result .= "-$last_value"; - } - $result =~ s/^,\s*//; - return $result; -} - -sub comma_append { - local($this, $a, $b) = @_; - - if (defined($a) && ($a =~ /\S/)) { - if (defined($b) && ($b =~ /\S/)) { - return "$a,$b"; - } else { - return $a; - } - } else { - if (defined($b) && ($b =~ /\S/)) { - return $b; - } else { - return ""; - } - } -} - -sub version { - return "3.17"; -} - -sub print_stderr { - local($this, $message, $verbose) = @_; - - $verbose = 1 unless defined($verbose); - print STDERR $message if $verbose; - return 1; -} - -sub print_log { - local($this, $message, *LOG, $verbose) = @_; - - $verbose = 1 unless defined($verbose); - print LOG $message if $verbose; - return 1; -} - -sub compare_alignment { - local($this, $a, $b, $delimiter) = @_; - - $delimiter = "-" unless $delimiter; - my @a_list = split($delimiter, $a); - my @b_list = split($delimiter, $b); - - while (@a_list && @b_list) { - $a_head = shift @a_list; - $b_head = shift @b_list; - next if $a_head eq $b_head; - return $a_head <=> $b_head if ($a_head =~ /^\d+$/) && ($b_head =~ /^\d+$/); - return $a_head cmp $b_head; - } - return -1 if @a_list; - return 1 if @b_list; - return 0; -} - -sub normalize_string { - # normalize punctuation, full-width characters (to ASCII) - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - - $norm_s = $s; - $norm_s =~ tr/A-Z/a-z/; - - $norm_s =~ s/ \@([-:\/])/ $1/g; # non-initial left @ - $norm_s =~ s/^\@([-:\/])/$1/; # initial left @ - $norm_s =~ s/([-:\/])\@ /$1 /g; # non-initial right @ - $norm_s =~ s/([-:\/])\@$/$1/; # initial right @ - $norm_s =~ s/([\(\)"])([,;.?!])/$1 $2/g; - $norm_s =~ s/\bcannot\b/can not/g; - - $norm_s =~ s/\xC2\xAD/-/g; # soft hyphen - - $norm_s =~ s/\xE2\x80\x94/-/g; # em dash - $norm_s =~ s/\xE2\x80\x95/-/g; # horizontal bar - $norm_s =~ s/\xE2\x80\x98/`/g; # grave accent - $norm_s =~ s/\xE2\x80\x99/'/g; # apostrophe - $norm_s =~ s/\xE2\x80\x9C/"/g; # left double quote mark - $norm_s =~ s/\xE2\x80\x9D/"/g; # right double quote mark - $norm_s =~ s/\xE2\x94\x80/-/g; # box drawings light horizontal - $norm_s =~ s/\xE2\x94\x81/-/g; # box drawings heavy horizontal - $norm_s =~ s/\xE3\x80\x81/,/g; # ideographic comma - $norm_s =~ s/\xE3\x80\x82/./g; # ideographic full stop - $norm_s =~ s/\xE3\x80\x88/"/g; # left angle bracket - $norm_s =~ s/\xE3\x80\x89/"/g; # right angle bracket - 
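- # The substitutions below continue the CJK punctuation mapping (double angle brackets, corner
- # brackets, katakana middle dot, UTF-8 BOM); fullwidth ASCII forms (U+FF01-U+FF5E) are folded
- # to plain ASCII further down, inside the Chinese ("zh") branch, via byte-range transliteration.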
$norm_s =~ s/\xE3\x80\x8A/"/g; # left double angle bracket - $norm_s =~ s/\xE3\x80\x8B/"/g; # right double angle bracket - $norm_s =~ s/\xE3\x80\x8C/"/g; # left corner bracket - $norm_s =~ s/\xE3\x80\x8D/"/g; # right corner bracket - $norm_s =~ s/\xE3\x80\x8E/"/g; # left white corner bracket - $norm_s =~ s/\xE3\x80\x8F/"/g; # right white corner bracket - $norm_s =~ s/\xE3\x83\xBB/\xC2\xB7/g; # katakana middle dot -> middle dot - $norm_s =~ s/\xEF\xBB\xBF//g; # UTF8 marker - - if ($control =~ /\bzh\b/i) { - # de-tokenize Chinese - unless ($control =~ /\bpreserve-tok\b/) { - while ($norm_s =~ /[\xE0-\xEF][\x80-\xBF][\x80-\xBF] [\xE0-\xEF][\x80-\xBF][\x80-\xBF]/) { - $norm_s =~ s/([\xE0-\xEF][\x80-\xBF][\x80-\xBF]) ([\xE0-\xEF][\x80-\xBF][\x80-\xBF])/$1$2/g; - } - $norm_s =~ s/([\xE0-\xEF][\x80-\xBF][\x80-\xBF]) ([\x21-\x7E])/$1$2/g; - $norm_s =~ s/([\x21-\x7E]) ([\xE0-\xEF][\x80-\xBF][\x80-\xBF])/$1$2/g; - } - - # fullwidth characters - while ($norm_s =~ /\xEF\xBC[\x81-\xBF]/) { - ($pre,$fullwidth,$post) = ($norm_s =~ /^(.*)(\xEF\xBC[\x81-\xBF])(.*)$/); - $fullwidth =~ s/^\xEF\xBC//; - $fullwidth =~ tr/[\x81-\xBF]/[\x21-\x5F]/; - $norm_s = "$pre$fullwidth$post"; - } - while ($norm_s =~ /\xEF\xBD[\x80-\x9E]/) { - ($pre,$fullwidth,$post) = ($norm_s =~ /^(.*)(\xEF\xBD[\x80-\x9E])(.*)$/); - $fullwidth =~ s/^\xEF\xBD//; - $fullwidth =~ tr/[\x80-\x9E]/[\x60-\x7E]/; - $norm_s = "$pre$fullwidth$post"; - } - $norm_s =~ tr/A-Z/a-z/ unless $control =~ /\bpreserve-case\b/; - - unless ($control =~ /\bpreserve-tok\b/) { - while ($norm_s =~ /[\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E] [\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]/) { - $norm_s =~ s/([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]) ([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E])/$1$2/g; - } - $norm_s =~ s/([\x21-\x7E]) ([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E])/$1$2/g; - $norm_s =~ s/([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]) ([\x21-\x7E])/$1$2/g; - $norm_s =~ s/ (\xC2\xA9|\xC2\xB7|\xC3\x97) /$1/g; # copyright sign, middle dot, multiplication sign - } - } - - if (($control =~ /\bzh\b/i) && ($control =~ /\bnorm-char\b/)) { - $norm_s =~ s/\xE6\x96\xBC/\xE4\xBA\x8E/g; # feng1 (first char. of Chin. "lie low", line 1308) - $norm_s =~ s/\xE6\xAD\xA7/\xE5\xB2\x90/g; # qi2 (second char. of Chin. "difference", line 1623) - $norm_s =~ s/\xE8\x82\xB2/\xE6\xAF\x93/g; # yu4 (second char. of Chin. "sports", line 440) - $norm_s =~ s/\xE8\x91\x97/\xE7\x9D\x80/g; # zhao (second char. of Chin. "prominent", line 4) - $norm_s =~ s/\xE9\x81\x87/\xE8\xBF\x82/g; # yu4 (second char. of Chin. "good luck", line 959) - } - - if ($control =~ /\bspurious-punct\b/) { - $norm_s =~ s/^\s*[-_\." ]+//; - $norm_s =~ s/[-_\." 
]+\s*$//; - $norm_s =~ s/\(\s+end\s+\)\s*$//i; - $norm_s =~ s/^\s*null\s*$//i; - } - - $norm_s =~ s/^\s+//; - $norm_s =~ s/\s+$//; - $norm_s =~ s/\s+/ /g; - - return $norm_s; -} - -sub normalize_extreme_string { - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - - $norm_s = $s; - $norm_s =~ s/\xE2\xA9\xBE/\xE2\x89\xA5/g; # slanted greater than or equal to - - return $norm_s; -} - -sub increase_ht_count { - local($this, *ht, $incr, @path) = @_; - - if ($#path == 0) { - $ht{($path[0])} = ($ht{($path[0])} || 0) + $incr; - } elsif ($#path == 1) { - $ht{($path[0])}->{($path[1])} - = ($ht{($path[0])}->{($path[1])} || 0) + $incr; - } elsif ($#path == 2) { - $ht{($path[0])}->{($path[1])}->{($path[2])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])} || 0) + $incr; - } elsif ($#path == 3) { - $ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])} || 0) + $incr; - } elsif ($#path == 4) { - $ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])}->{($path[4])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])}->{($path[4])} || 0) + $incr; - } else { - print STDERR "increase_ht_count unsupported for path of length " . ($#path + 1) . "\n"; - } -} - -sub adjust_numbers { - # non-negative integers - local($this, $s, $delta) = @_; - - $result = ""; - while ($s =~ /\d/) { - ($pre,$i,$post) = ($s =~ /^([^0-9]*)(\d+)([^0-9].*|)$/); - $result .= $pre . ($i + $delta); - $s = $post; - } - $result .= $s; - return $result; -} - -sub first_defined { - local($this, @list) = @_; - - foreach $elem (@list) { - return $elem if defined($elem); - } - return ""; -} - -sub first_defined_non_empty { - local($this, @list) = @_; - - foreach $item (@list) { - return $item if defined($item) && ($item ne ""); - } - return ""; -} - -sub elem_after_member_list { - local($this,$elem,@array) = @_; - - my @elem_after_member_list = (); - foreach $i ((0 .. ($#array - 1))) { - push(@elem_after_member_list, $array[$i+1]) if $elem eq $array[$i]; - } - return join(" ", @elem_after_member_list); -} - -sub add_value_to_list { - local($this,$s,$value,$sep) = @_; - - $s = "" unless defined($s); - $sep = "," unless defined($sep); - return ($s =~ /\S/) ? "$s$sep$value" : $value; -} - -sub add_new_value_to_list { - local($this,$s,$value,$sep) = @_; - - $s = "" unless defined($s); - $sep = "," unless defined($sep); - my @values = split(/$sep/, $s); - push(@values, $value) if defined($value) && ! $this->member($value, @values); - - return join($sep, @values); -} - -sub add_new_hash_value_to_list { - local($this,*ht,$key,$value,$sep) = @_; - - $sep = "," unless defined($sep); - my $value_s = $ht{$key}; - if (defined($value_s)) { - my @values = split(/$sep/, $value_s); - push(@values, $value) unless $this->member($value, @values); - $ht{$key} = join($sep, @values); - } else { - $ht{$key} = $value; - } -} - -sub ip_info { - local($this, $ip_address) = @_; - - my %ip_map = (); - $ip_map{"128.9.208.69"} = "Ulf Hermjakob (bach.isi.edu)"; - $ip_map{"128.9.208.169"} = "Ulf Hermjakob (brahms.isi.edu)"; - $ip_map{"128.9.184.148"} = "Ulf Hermjakob (beethoven.isi.edu ?)"; - $ip_map{"128.9.184.162"} = "Ulf Hermjakob (beethoven.isi.edu)"; - $ip_map{"128.9.176.39"} = "Kevin Knight"; - $ip_map{"128.9.184.187"} = "Kevin Knight"; - $ip_map{"128.9.216.56"} = "Kevin Knight"; - $ip_map{"128.9.208.155"} = "cage.isi.edu"; - - return ($ip_name = $ip_map{$ip_address}) ? 
"$ip_address - $ip_name" : $ip_address; -} - -# from standalone de-accent.pl -sub de_accent_string { - local($this, $s) = @_; - - $s =~ tr/A-Z/a-z/; - unless (0) { - # Latin-1 - if ($s =~ /\xC3[\x80-\xBF]/) { - $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g; - $s =~ s/Æ/Ae/g; - $s =~ s/Ç/C/g; - $s =~ s/Ð/D/g; - $s =~ s/(È|É|Ê|Ë)/E/g; - $s =~ s/(Ì|Í|Î|Ï)/I/g; - $s =~ s/Ñ/N/g; - $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g; - $s =~ s/(Ù|Ú|Û|Ü)/U/g; - $s =~ s/Þ/Th/g; - $s =~ s/Ý/Y/g; - $s =~ s/(à|á|â|ã|ä|å)/a/g; - $s =~ s/æ/ae/g; - $s =~ s/ç/c/g; - $s =~ s/(è|é|ê|ë)/e/g; - $s =~ s/(ì|í|î|ï)/i/g; - $s =~ s/ð/d/g; - $s =~ s/ñ/n/g; - $s =~ s/(ò|ó|ô|õ|ö)/o/g; - $s =~ s/ß/ss/g; - $s =~ s/þ/th/g; - $s =~ s/(ù|ú|û|ü)/u/g; - $s =~ s/(ý|ÿ)/y/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/(Ā|Ă|Ą)/A/g; - $s =~ s/(ā|ă|ą)/a/g; - $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g; - $s =~ s/(ć|ĉ|ċ|č)/c/g; - $s =~ s/(Ď|Đ)/D/g; - $s =~ s/(ď|đ)/d/g; - $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g; - $s =~ s/(ē|ĕ|ė|ę|ě)/e/g; - $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g; - $s =~ s/(ĝ|ğ|ġ|ģ)/g/g; - $s =~ s/(Ĥ|Ħ)/H/g; - $s =~ s/(ĥ|ħ)/h/g; - $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g; - $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g; - $s =~ s/IJ/Ij/g; - $s =~ s/ij/ij/g; - $s =~ s/Ĵ/J/g; - $s =~ s/ĵ/j/g; - $s =~ s/Ķ/K/g; - $s =~ s/(ķ|ĸ)/k/g; - $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g; - $s =~ s/(ļ|ľ|ŀ|ł)/l/g; - $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g; - $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g; - $s =~ s/(Ō|Ŏ|Ő)/O/g; - $s =~ s/(ō|ŏ|ő)/o/g; - $s =~ s/Œ/Oe/g; - $s =~ s/œ/oe/g; - $s =~ s/(Ŕ|Ŗ|Ř)/R/g; - $s =~ s/(ŕ|ŗ|ř)/r/g; - $s =~ s/(Ś|Ŝ|Ş|Š)/S/g; - $s =~ s/(ś|ŝ|ş|š|ſ)/s/g; - $s =~ s/(Ţ|Ť|Ŧ)/T/g; - $s =~ s/(ţ|ť|ŧ)/t/g; - $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g; - $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g; - $s =~ s/Ŵ/W/g; - $s =~ s/ŵ/w/g; - $s =~ s/(Ŷ|Ÿ)/Y/g; - $s =~ s/ŷ/y/g; - $s =~ s/(Ź|Ż|Ž)/Z/g; - $s =~ s/(ź|ż|ž)/z/g; - } - # Latin Extended-B - if ($s =~ /[\xC7-\xC7][\x80-\xBF]/) { - $s =~ s/(\xC7\x8D)/A/g; - $s =~ s/(\xC7\x8E)/a/g; - $s =~ s/(\xC7\x8F)/I/g; - $s =~ s/(\xC7\x90)/i/g; - $s =~ s/(\xC7\x91)/O/g; - $s =~ s/(\xC7\x92)/o/g; - $s =~ s/(\xC7\x93)/U/g; - $s =~ s/(\xC7\x94)/u/g; - $s =~ s/(\xC7\x95)/U/g; - $s =~ s/(\xC7\x96)/u/g; - $s =~ s/(\xC7\x97)/U/g; - $s =~ s/(\xC7\x98)/u/g; - $s =~ s/(\xC7\x99)/U/g; - $s =~ s/(\xC7\x9A)/u/g; - $s =~ s/(\xC7\x9B)/U/g; - $s =~ s/(\xC7\x9C)/u/g; - } - # Latin Extended Additional - if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) { - $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g; - $s =~ s/(ḃ|ḅ|ḇ)/b/g; - $s =~ s/(ḉ)/c/g; - $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g; - $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g; - $s =~ s/(ḟ)/f/g; - $s =~ s/(ḡ)/g/g; - $s =~ s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g; - $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g; - $s =~ s/(ḱ|ḳ|ḵ)/k/g; - $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g; - $s =~ s/(ḿ|ṁ|ṃ)/m/g; - $s =~ s/(ṅ|ṇ|ṉ|ṋ)/m/g; - $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g; - $s =~ s/(ṕ|ṗ)/p/g; - $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g; - $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g; - $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g; - $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g; - $s =~ s/(ṽ|ṿ)/v/g; - $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g; - $s =~ s/(ẋ|ẍ)/x/g; - $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g; - $s =~ s/(ẑ|ẓ|ẕ)/z/g; - $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g; - $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g; - $s =~ s/(Ḉ)/C/g; - $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g; - $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g; - $s =~ s/(Ḟ)/F/g; - $s =~ s/(Ḡ)/G/g; - $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g; - $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g; - $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g; - $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g; - $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g; - $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g; - $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g; - $s =~ s/(Ṕ|Ṗ)/P/g; - $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g; - $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g; - $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g; - $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g; - $s =~ s/(Ṽ|Ṿ)/V/g; - $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g; - $s =~ s/(Ẍ)/X/g; - $s =~ 
s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g; - $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/ά/α/g; - $s =~ s/έ/ε/g; - $s =~ s/ί/ι/g; - $s =~ s/ϊ/ι/g; - $s =~ s/ΐ/ι/g; - $s =~ s/ό/ο/g; - $s =~ s/ύ/υ/g; - $s =~ s/ϋ/υ/g; - $s =~ s/ΰ/υ/g; - $s =~ s/ώ/ω/g; - $s =~ s/Ά/Α/g; - $s =~ s/Έ/Ε/g; - $s =~ s/Ή/Η/g; - $s =~ s/Ί/Ι/g; - $s =~ s/Ϊ/Ι/g; - $s =~ s/Ύ/Υ/g; - $s =~ s/Ϋ/Υ/g; - $s =~ s/Ώ/Ω/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/Ѐ/Е/g; - $s =~ s/Ё/Е/g; - $s =~ s/Ѓ/Г/g; - $s =~ s/Ќ/К/g; - $s =~ s/Ѝ/И/g; - $s =~ s/Й/И/g; - $s =~ s/ѐ/е/g; - $s =~ s/ё/е/g; - $s =~ s/ѓ/г/g; - $s =~ s/ќ/к/g; - $s =~ s/ѝ/и/g; - $s =~ s/й/и/g; - } - } - return $s; -} - -sub read_de_accent_case_resource { - local($this, $filename, *ht, *LOG, $verbose) = @_; - # e.g. data/char-de-accent-lc.txt - - if (open(IN, $filename)) { - my $mode = "de-accent"; - my $line_number = 0; - my $n_de_accent_targets = 0; - my $n_de_accent_sources = 0; - my $n_case_entries = 0; - while () { - s/^\xEF\xBB\xBF//; - s/\s*$//; - $line_number++; - if ($_ =~ /^#+\s*CASE\b/) { - $mode = "case"; - } elsif ($_ =~ /^#+\s*PUNCTUATION NORMALIZATION\b/) { - $mode = "punctuation-normalization"; - } elsif ($_ =~ /^#/) { - # ignore comment - } elsif ($_ =~ /^\s*$/) { - # ignore empty line - } elsif (($mode eq "de-accent") && (($char_without_accent, @chars_with_accent) = split(/\s+/, $_))) { - if (keys %{$ht{DE_ACCENT_INV}->{$char_without_accent}}) { - print LOG "Ignoring duplicate de-accent line for target $char_without_accent in l.$line_number in $filename\n" unless $char_without_accent eq "--"; - } elsif (@chars_with_accent) { - $n_de_accent_targets++; - foreach $char_with_accent (@chars_with_accent) { - my @prev_target_chars = keys %{$ht{DE_ACCENT}->{$char_with_accent}}; - print LOG "Accent character $char_with_accent has duplicate target $char_without_accent (besides @prev_target_chars) in l.$line_number in $filename\n" if @prev_target_chars && (! ($char_without_accent =~ /^[aou]e$/i)); - $char_without_accent = "" if $char_without_accent eq "--"; - $ht{DE_ACCENT}->{$char_with_accent}->{$char_without_accent} = 1; - $ht{DE_ACCENT1}->{$char_with_accent} = $char_without_accent - if (! 
defined($ht{DE_ACCENT1}->{$char_with_accent})) - && ($char_without_accent =~ /^.[\x80-\xBF]*$/); - $ht{DE_ACCENT_INV}->{$char_without_accent}->{$char_with_accent} = 1; - $ht{UPPER_CASE_OR_ACCENTED}->{$char_with_accent} = 1; - $n_de_accent_sources++; - } - } else { - print LOG "Empty de-accent list for $char_without_accent in l.$line_number in $filename\n"; - } - } elsif (($mode eq "punctuation-normalization") && (($norm_punct, @unnorm_puncts) = split(/\s+/, $_))) { - if (keys %{$ht{NORM_PUNCT_INV}->{$norm_punct}}) { - print LOG "Ignoring duplicate punctuation-normalization line for target $norm_punct in l.$line_number in $filename\n"; - } elsif (@unnorm_puncts) { - foreach $unnorm_punct (@unnorm_puncts) { - my $prev_norm_punct = $ht{NORM_PUNCT}->{$unnorm_punct}; - if ($prev_norm_punct) { - print LOG "Ignoring duplicate punctuation normalization $unnorm_punct -> $norm_punct (besides $prev_norm_punct) in l.$line_number in $filename\n"; - } - $ht{NORM_PUNCT}->{$unnorm_punct} = $norm_punct; - $ht{NORM_PUNCT_INV}->{$norm_punct}->{$unnorm_punct} = 1; - $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$unnorm_punct} = $norm_punct; - } - } - } elsif (($mode eq "case") && (($uc_char, $lc_char) = ($_ =~ /^(\S+)\s+(\S+)\s*$/))) { - $ht{UPPER_TO_LOWER_CASE}->{$uc_char} = $lc_char; - $ht{LOWER_TO_UPPER_CASE}->{$lc_char} = $uc_char; - $ht{UPPER_CASE_P}->{$uc_char} = 1; - $ht{LOWER_CASE_P}->{$lc_char} = 1; - $ht{UPPER_CASE_OR_ACCENTED}->{$uc_char} = 1; - $n_case_entries++; - } else { - print LOG "Unrecognized l.$line_number in $filename\n"; - } - } - foreach $char (keys %{$ht{UPPER_CASE_OR_ACCENTED}}) { - my $lc_char = $ht{UPPER_TO_LOWER_CASE}->{$char}; - $lc_char = $char unless defined($lc_char); - my @de_accend_char_results = sort keys %{$ht{DE_ACCENT}->{$lc_char}}; - my $new_char = (@de_accend_char_results) ? $de_accend_char_results[0] : $lc_char; - $ht{LC_DE_ACCENT_CHAR}->{$char} = $new_char; - $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$char} = $new_char; - } - close(IN); - print LOG "Found $n_case_entries case entries, $n_de_accent_sources/$n_de_accent_targets source/target entries in $line_number lines in file $filename\n" if $verbose; - } else { - print LOG "Can't open $filename\n"; - } -} - -sub de_accent_char { - local($this, $char, *ht, $default) = @_; - - @de_accend_char_results = sort keys %{$ht{DE_ACCENT}->{$char}}; - return (@de_accend_char_results) ? @de_accend_char_results : ($default); -} - -sub lower_case_char { - local($this, $char, *ht, $default) = @_; - - return (defined($lc = $ht{UPPER_TO_LOWER_CASE}->{$char})) ? $lc : $default; -} - -sub lower_case_and_de_accent_char { - local($this, $char, *ht) = @_; - - my $lc_char = $this->lower_case_char($char, *ht, $char); - return $this->de_accent_char($lc_char, *ht, $lc_char); -} - -sub lower_case_and_de_accent_string { - local($this, $string, *ht, $control) = @_; - - # $this->stopwatch("start", "lower_case_and_de_accent_string", *ht, *LOG); - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - my @chars = $this->split_into_utf8_characters($string); - my $result = ""; - foreach $char (@chars) { - my @lc_de_accented_chars = $this->lower_case_and_de_accent_char($char, *ht); - if ($norm_punct_p - && (! @lc_de_accented_chars)) { - my $norm_punct = $ht{NORM_PUNCT}->{$char}; - @lc_de_accented_chars = ($norm_punct) if $norm_punct; - } - $result .= ((@lc_de_accented_chars) ? 
$lc_de_accented_chars[0] : $char); - } - # $this->stopwatch("end", "lower_case_and_de_accent_string", *ht, *LOG); - return $result; -} - -sub lower_case_and_de_accent_norm_punct { - local($this, $char, *ht) = @_; - - my $new_char = $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$char}; - return (defined($new_char)) ? $new_char : $char; -} - -sub lower_case_and_de_accent_string2 { - local($this, $string, *ht, $control) = @_; - - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - # $this->stopwatch("start", "lower_case_and_de_accent_string2", *ht, *LOG); - my $s = $string; - my $result = ""; - while (($char, $rest) = ($s =~ /^(.[\x80-\xBF]*)(.*)$/)) { - my $new_char = $ht{LC_DE_ACCENT_CHAR}->{$char}; - if (defined($new_char)) { - $result .= $new_char; - } elsif ($norm_punct_p && defined($new_char = $ht{NORM_PUNCT}->{$char})) { - $result .= $new_char; - } else { - $result .= $char; - } - $s = $rest; - } - # $this->stopwatch("end", "lower_case_and_de_accent_string2", *ht, *LOG); - return $result; -} - -sub lower_case_string { - local($this, $string, *ht, $control) = @_; - - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - my $s = $string; - my $result = ""; - while (($char, $rest) = ($s =~ /^(.[\x80-\xBF]*)(.*)$/)) { - my $lc_char = $ht{UPPER_TO_LOWER_CASE}->{$char}; - if (defined($lc_char)) { - $result .= $lc_char; - } elsif ($norm_punct_p && defined($new_char = $ht{NORM_PUNCT}->{$char})) { - $result .= $new_char; - } else { - $result .= $char; - } - $s = $rest; - } - return $result; -} - -sub round_to_n_decimal_places { - local($this, $x, $n, $fill_decimals_p) = @_; - - $fill_decimals_p = 0 unless defined($fill_decimals_p); - unless (defined($x)) { - return $x; - } - if (($x =~ /^-?\d+$/) && (! $fill_decimals_p)) { - return $x; - } - $factor = 1; - foreach $i ((1 .. $n)) { - $factor *= 10; - } - my $rounded_number; - if ($x > 0) { - $rounded_number = (int(($factor * $x) + 0.5) / $factor); - } else { - $rounded_number = (int(($factor * $x) - 0.5) / $factor); - } - if ($fill_decimals_p) { - ($period, $decimals) = ($rounded_number =~ /^-?\d+(\.?)(\d*)$/); - $rounded_number .= "." unless $period || ($n == 0); - foreach ((1 .. ($n - length($decimals)))) { - $rounded_number .= 0; - } - } - return $rounded_number; -} - -sub commify { - local($caller,$number) = @_; - - my $text = reverse $number; - $text =~ s/(\d\d\d)(?=\d)(?!\d*\.)/$1,/g; - return scalar reverse $text; -} - -sub add_javascript_functions { - local($caller,@function_names) = @_; - - $add_javascript_function_s = ""; - foreach $function_name (@function_names) { - - if ($function_name eq "highlight_elems") { - $add_javascript_function_s .= " - function highlight_elems(group_id, value) { - if (group_id != '') { - i = 1; - id = group_id + '-' + i; - while ((s = document.getElementById(id)) != null) { - if (! 
s.origColor) { - if (s.style.color) { - s.origColor = s.style.color; - } else { - s.origColor = '#000000'; - } - } - if (value == '1') { - s.style.color = '#0000FF'; - if (s.innerHTML == '-') { - s.style.innerHtml = s.innerHTML; - s.innerHTML = '-   ← here'; - s.style.fontWeight = 900; - } else { - s.style.fontWeight = 'bold'; - } - } else { - s.style.fontWeight = 'normal'; - s.style.color = s.origColor; - if (s.style.innerHtml != null) { - s.innerHTML = s.style.innerHtml; - } - } - i = i + 1; - id = group_id + '-' + i; - } - } - } -"; - } elsif ($function_name eq "set_style_for_ids") { - $add_javascript_function_s .= " - function set_style_for_ids(style,id_list) { - var ids = id_list.split(/\\s+/); - var len = ids.length; - var s; - for (var i=0; i>$filename")) { - print OUT $s; - close(OUT); - $result = "Appended"; - } else { - $result = "Can't append"; - } - } else { - if (open(OUT, ">$filename")) { - print OUT $s; - close(OUT); - $result = "Wrote"; - } else { - $result = "Can't write"; - } - } - chmod($mod, $filename) if defined($mod) && -e $filename; - return $result; -} - -sub square { - local($caller, $x) = @_; - - return $x * $x; -} - -sub mutual_info { - local($caller, $ab_count, $a_count, $b_count, $total_count, $smoothing) = @_; - - $smoothing = 1 unless defined($smoothing); - $ab_count = 0 unless defined($ab_count); - return 0 unless $a_count && $b_count && $total_count; - - my $p_ab = $ab_count / $total_count; - my $p_a = $a_count / $total_count; - my $p_b = $b_count / $total_count; - my $expected_ab = $p_a * $p_b * $total_count; - - return -99 unless $expected_ab || $smoothing; - - return CORE::log(($ab_count + $smoothing) / ($expected_ab + $smoothing)); -} - -sub mutual_info_multi { - local($caller, $multi_count, $total_count, $smoothing, @counts) = @_; - - return 0 unless $total_count; - my $p_indivuals = 1; - foreach $count (@counts) { - return 0 unless $count; - $p_indivuals *= ($count / $total_count); - } - my $expected_multi_count = $p_indivuals * $total_count; - # print STDERR "actual vs. expected multi_count($multi_count, $total_count, $smoothing, @counts) = $multi_count vs. $expected_multi_count\n"; - - return -99 unless $expected_multi_count || $smoothing; - - return CORE::log(($multi_count + $smoothing) / ($expected_multi_count + $smoothing)); -} - -sub precision_recall_fmeasure { - local($caller, $n_gold, $n_test, $n_shared, $pretty_print_p) = @_; - - unless (($n_gold =~ /^[1-9]\d*$/) && ($n_test =~ /^[1-9]\d*$/)) { - $zero = ($pretty_print_p) ? "0%" : 0; - if ($n_gold =~ /^[1-9]\d*$/) { - return ("n/a", $zero, $zero); - } elsif ($n_test =~ /^[1-9]\d*$/) { - return ($zero, "n/a", $zero); - } else { - return ("n/a", "n/a", "n/a"); - } - } - my $precision = $n_shared / $n_test; - my $recall = $n_shared / $n_gold; - my $f_measure = ($precision * $recall * 2) / ($precision + $recall); - - return ($precision, $recall, $f_measure) unless $pretty_print_p; - - my $pretty_precision = $caller->round_to_n_decimal_places(100*$precision, 1) . "%"; - my $pretty_recall = $caller->round_to_n_decimal_places(100*$recall, 1) . "%"; - my $pretty_f_measure = $caller->round_to_n_decimal_places(100*$f_measure, 1) . 
"%"; - - return ($pretty_precision, $pretty_recall, $pretty_f_measure); -} - -sub recapitalize_named_entity { - local($caller, $s) = @_; - - my @comps = (); - foreach $comp (split(/\s+/, $s)) { - if ($comp =~ /^(and|da|for|of|on|the|van|von)$/) { - push(@comps, $comp); - } elsif ($comp =~ /^[a-z]/) { - push(@comps, ucfirst $comp); - } else { - push(@comps, $comp); - } - } - return join(" ", @comps); -} - -sub slot_value_in_double_colon_del_list { - local($this, $s, $slot, $default) = @_; - - $default = "" unless defined($default); - if (($value) = ($s =~ /::$slot\s+(\S.*\S|\S)\s*$/)) { - $value =~ s/\s*::\S.*\s*$//; - return $value; - } else { - return $default; - } -} - -sub synt_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::synt\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub form_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::form\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub lex_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::lex\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub multi_slot_value_in_double_colon_del_list { - # e.g. when there are multiple slot/value pairs in a line, e.g. ::eng ... :eng ... - local($this, $s, $slot) = @_; - - @values = (); - while (($value, $rest) = ($s =~ /::$slot\s+(\S|\S.*?\S)(\s+::\S.*|\s*)$/)) { - push(@values, $value); - $s = $rest; - } - return @values; -} - -sub remove_slot_in_double_colon_del_list { - local($this, $s, $slot) = @_; - - $s =~ s/::$slot(?:|\s+\S|\s+\S.*?\S)(\s+::\S.*|\s*)$/$1/; - $s =~ s/^\s*//; - return $s; -} - -sub extract_split_info_from_split_dir { - local($this, $dir, *ht) = @_; - - my $n_files = 0; - my $n_snt_ids = 0; - if (opendir(DIR, $dir)) { - my @filenames = sort readdir(DIR); - closedir(DIR); - foreach $filename (@filenames) { - next unless $filename =~ /\.txt$/; - my $split_class; - if (($split_class) = ($filename =~ /-(dev|training|test)-/)) { - my $full_filename = "$dir/$filename"; - if (open(IN, $full_filename)) { - my $old_n_snt_ids = $n_snt_ids; - while () { - if (($snt_id) = ($_ =~ /^#\s*::id\s+(\S+)/)) { - if ($old_split_class = $ht{SPLIT_CLASS}->{$snt_id}) { - unless ($old_split_class eq $split_class) { - print STDERR "Conflicting split class for $snt_id: $old_split_class $split_class\n"; - } - } else { - $ht{SPLIT_CLASS}->{$snt_id} = $split_class; - $ht{SPLIT_CLASS_COUNT}->{$split_class} = ($ht{SPLIT_CLASS_COUNT}->{$split_class} || 0) + 1; - $n_snt_ids++; - } - } - } - $n_files++ unless $n_snt_ids == $old_n_snt_ids; - close(IN); - } else { - print STDERR "Can't open file $full_filename"; - } - } else { - print STDERR "Skipping file $filename when extracting split info from $dir\n"; - } - } - print STDERR "Extracted $n_snt_ids split classes from $n_files files.\n"; - } else { - print STDERR "Can't open directory $dir to extract split info.\n"; - } -} - -sub extract_toks_for_split_class_from_dir { - local($this, $dir, *ht, $split_class, $control) = @_; - - $control = "" unless defined($control); - $print_snt_id_p = ($control =~ /\bwith-snt-id\b/); - my $n_files = 0; - my $n_snts = 0; - if (opendir(DIR, $dir)) { - my @filenames = sort readdir(DIR); - closedir(DIR); - foreach $filename (@filenames) { - next unless $filename =~ /^alignment-release-.*\.txt$/; - my $full_filename = "$dir/$filename"; - if (open(IN, $full_filename)) { - my $old_n_snts = $n_snts; - my $snt_id = ""; - while () { - if (($s_value) = ($_ =~ 
/^#\s*::id\s+(\S+)/)) { - $snt_id = $s_value; - $proper_split_class_p - = ($this_split_class = $ht{SPLIT_CLASS}->{$snt_id}) - && ($this_split_class eq $split_class); - } elsif (($tok) = ($_ =~ /^#\s*::tok\s+(\S|\S.*\S)\s*$/)) { - if ($proper_split_class_p) { - print "$snt_id " if $print_snt_id_p; - print "$tok\n"; - $n_snts++; - } - } - } - $n_files++ unless $n_snts == $old_n_snts; - close(IN); - } else { - print STDERR "Can't open file $full_filename"; - } - } - print STDERR "Extracted $n_snts tokenized sentences ($split_class) from $n_files files.\n"; - } else { - print STDERR "Can't open directory $dir to extract tokens.\n"; - } -} - -sub load_relevant_tok_ngram_corpus { - local($this, $filename, *ht, $max_lex_rule_span, $ngram_count_min, $optional_ngram_output_filename) = @_; - - $ngram_count_min = 1 unless $ngram_count_min; - $max_lex_rule_span = 10 unless $max_lex_rule_span; - my $n_ngram_instances = 0; - my $n_ngram_types = 0; - if (open(IN, $filename)) { - while () { - s/\s*$//; - @tokens = split(/\s+/, $_); - foreach $from_token_index ((0 .. $#tokens)) { - foreach $to_token_index (($from_token_index .. ($from_token_index + $max_lex_rule_span -1))) { - last if $to_token_index > $#tokens; - my $ngram = join(" ", @tokens[$from_token_index .. $to_token_index]); - $ht{RELEVANT_NGRAM}->{$ngram} = ($ht{RELEVANT_NGRAM}->{$ngram} || 0) + 1; - } - } - } - close(IN); - if ($optional_ngram_output_filename && open(OUT, ">$optional_ngram_output_filename")) { - foreach $ngram (sort keys %{$ht{RELEVANT_NGRAM}}) { - $count = $ht{RELEVANT_NGRAM}->{$ngram}; - next unless $count >= $ngram_count_min; - print OUT "($count) $ngram\n"; - $n_ngram_types++; - $n_ngram_instances += $count; - } - close(OUT); - print STDERR "Extracted $n_ngram_types ngram types, $n_ngram_instances ngram instances.\n"; - print STDERR "Wrote ngram stats to $optional_ngram_output_filename\n"; - } - } else { - print STDERR "Can't open relevant tok ngram corpus $filename\n"; - } -} - -sub load_relevant_tok_ngrams { - local($this, $filename, *ht) = @_; - - my $n_entries = 0; - if (open(IN, $filename)) { - while () { - s/\s*$//; - if (($count, $ngram) = ($_ =~ /^\((\d+)\)\s+(\S|\S.*\S)\s*$/)) { - $lc_ngram = lc $ngram; - $ht{RELEVANT_NGRAM}->{$lc_ngram} = ($ht{RELEVANT_NGRAM}->{$lc_ngram} || 0) + $count; - $ht{RELEVANT_LC_NGRAM}->{$lc_ngram} = ($ht{RELEVANT_LC_NGRAM}->{$lc_ngram} || 0) + $count; - $n_entries++; - } - } - close(IN); - print STDERR "Read in $n_entries entries from $filename\n"; - } else { - print STDERR "Can't open relevant tok ngrams from $filename\n"; - } -} - -sub snt_id_sort_function { - local($this, $a, $b) = @_; - - if ((($core_a, $index_a) = ($a =~ /^(\S+)\.(\d+)$/)) - && (($core_b, $index_b) = ($b =~ /^(\S+)\.(\d+)$/))) { - return ($core_a cmp $core_b) || ($index_a <=> $index_b); - } else { - return $a cmp $b; - } -} - -sub count_value_sort_function { - local($this, $a_count, $b_count, $a_value, $b_value, $control) = @_; - - # normalize fractions such as "1/2" - if ($a_count > $b_count) { - return ($control eq "decreasing") ? -1 : 1; - } elsif ($b_count > $a_count) { - return ($control eq "decreasing") ? 
1 : -1; - } - $a_value = $num / $den if ($num, $den) = ($a_value =~ /^([1-9]\d*)\/([1-9]\d*)$/); - $b_value = $num / $den if ($num, $den) = ($b_value =~ /^([1-9]\d*)\/([1-9]\d*)$/); - $a_value =~ s/:/\./ if $a_value =~ /^\d+:\d+$/; - $b_value =~ s/:/\./ if $b_value =~ /^\d+:\d+$/; - if (($a_value =~ /^-?\d+(\.\d+)?$/) - && ($b_value =~ /^-?\d+(\.\d+)?$/)) { - return $a_value <=> $b_value; - } elsif ($a_value =~ /^-?\d+(\.\d+)?$/) { - return 1; - } elsif ($b_value =~ /^-?\d+(\.\d+)?$/) { - return -1; - } else { - return $a_value cmp $b_value; - } -} - -sub undef_to_blank { - local($this, $x) = @_; - - return (defined($x)) ? $x : ""; -} - -sub en_lex_amr_list { - local($this, $s) = @_; - - $bpe = qr{ \( (?: (?> [^()]+ ) | (??{ $bpe }))* \) }x; # see Perl Cookbook 2nd ed. p. 218 - @en_lex_amr_list = (); - my $amr_s; - my $lex; - my $test; - while ($s =~ /\S/) { - $s =~ s/^\s*//; - if (($s =~ /^\([a-z]\d* .*\)/) - && (($amr_s, $rest) = ($s =~ /^($bpe)(\s.*|)$/))) { - push(@en_lex_amr_list, $amr_s); - $s = $rest; - } elsif (($lex, $rest) = ($s =~ /^\s*(\S+)(\s.*|)$/)) { - push(@en_lex_amr_list, $lex); - $s = $rest; - } else { - print STDERR "en_lex_amr_list can't process: $s\n"; - $s = ""; - } - } - return @en_lex_amr_list; -} - -sub make_sure_dir_exists { - local($this, $dir, $umask) = @_; - - mkdir($dir, $umask) unless -d $dir; - chmod($umask, $dir); -} - -sub pretty_percentage { - local($this, $numerator, $denominator) = @_; - - return ($denominator == 0) ? "n/a" : ($this->round_to_n_decimal_places(100*$numerator/$denominator, 2) . "%"); -} - -sub html_color_nth_line { - local($this, $s, $n, $color, $delimiter) = @_; - - $delimiter = "
        " unless defined($delimiter); - @lines = split($delimiter, $s); - $lines[$n] = "" . $lines[$n] . "" if ($n =~ /^\d+$/) && ($n <= $#lines); - return join($delimiter, @lines); -} - -sub likely_valid_url_format { - local($this, $url) = @_; - - $url = lc $url; - return 0 if $url =~ /\s/; - return 0 if $url =~ /[@]/; - return 1 if $url =~ /^https?:\/\/.+\.[a-z]+(\?.+)?$/; - return 1 if $url =~ /[a-z].+\.(com|edu|gov|net|org)$/; - return 0; -} - -# see also EnglMorph->special_token_type -$common_file_suffixes = "aspx?|bmp|cgi|docx?|gif|html?|jpeg|jpg|mp3|mp4|pdf|php|png|pptx?|stm|svg|txt|xml"; -$common_top_domain_suffixes = "museum|info|cat|com|edu|gov|int|mil|net|org|ar|at|au|be|bg|bi|br|ca|ch|cn|co|cz|de|dk|es|eu|fi|fr|gr|hk|hu|id|ie|il|in|ir|is|it|jp|ke|kr|lu|mg|mx|my|nl|no|nz|ph|pl|pt|ro|rs|ru|rw|se|sg|sk|so|tr|tv|tw|tz|ua|ug|uk|us|za"; - -sub token_is_url_p { - local($this, $token) = @_; - - return 1 if $token =~ /^www(\.[a-z0-9]([-a-z0-9_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+)+\.([a-z]{2,2}|$common_top_domain_suffixes)(\/(\.{1,3}|[a-z0-9]([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+))*(\/[a-z0-9_][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 1 if $token =~ /^https?:\/\/([a-z]\.)?([a-z0-9]([-a-z0-9_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+\.)+[a-z]{2,}(\/(\.{1,3}|([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+))*(\/[a-z_][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 1 if $token =~ /^[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\.($common_top_domain_suffixes)(\/[a-z0-9]([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+)*(\/[a-z][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 0; -} - -sub token_is_email_p { - local($this, $token) = @_; - - return ($token =~ /^[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\@[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\.($common_top_domain_suffixes)$/i); -} - -sub token_is_filename_p { - local($this, $token) = @_; - - return 1 if $token =~ /\.($common_file_suffixes)$/; - return 0; -} - -sub token_is_xml_token_p { - local($this, $token) = @_; - - return ($token =~ /^&(amp|apos|gt|lt|nbsp|quot|&#\d+|&#x[0-9A-F]+);$/i); -} - -sub token_is_handle_p { - local($this, $token) = @_; - - return ($token =~ /^\@[a-z][_a-z0-9]*[a-z0-9]$/i); -} - -sub min { - local($this, @list) = @_; - - my $min = ""; - foreach $item (@list) { - $min = $item if ($item =~ /^-?\d+(?:\.\d*)?$/) && (($min eq "") || ($item < $min)); - } - return $min; -} - -sub max { - local($this, @list) = @_; - - my $max = ""; - foreach $item (@list) { - $max = $item if defined($item) && ($item =~ /^-?\d+(?:\.\d*)?(e[-+]\d+)?$/) && (($max eq "") || ($item > $max)); - } - return $max; -} - -sub split_tok_s_into_tokens { - local($this, $tok_s) = @_; - - @token_list = (); - while (($pre, $link_token, $post) = ($tok_s =~ /^(.*?)\s*(\@?<[^<>]+>\@?)\s*(.*)$/)) { - # generate dummy token for leading blank(s) - if (($tok_s =~ /^\s/) && ($pre eq "") && ($#token_list < 0)) { - push(@token_list, ""); - } else { - push(@token_list, split(/\s+/, $pre)); - } - push(@token_list, $link_token); - $tok_s = $post; - } - push(@token_list, split(/\s+/, $tok_s)); - return @token_list; -} - -sub shuffle { - local($this, @list) = @_; - - @shuffle_list = (); - while (@list) { - $len = $#list + 1; - $rand_position = int(rand($len)); - push(@shuffle_list, $list[$rand_position]); - splice(@list, $rand_position, 1); - } - $s = join(" ", @shuffle_list); - return @shuffle_list; -} - -sub timestamp_to_seconds { - local($this, $timestamp) = @_; - - my $epochtime; - if (($year, $month, $day, $hour, $minute, $second) = ($timestamp =~ 
/^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)$/)) { - $epochtime = timelocal($second, $minute, $hour, $day, $month-1, $year); - } elsif (($year, $month, $day) = ($timestamp =~ /^(\d\d\d\d)-(\d\d)-(\d\d)$/)) { - $epochtime = timelocal(0, 0, 0, $day, $month-1, $year); - } elsif (($year, $month, $day, $hour, $minute, $second, $second_fraction) = ($timestamp =~ /^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)\.(\d+)$/)) { - $epochtime = timelocal($second, $minute, $hour, $day, $month-1, $year) + ($second_fraction / (10 ** length($second_fraction))); - } else { - $epochtime = 0; - } - return $epochtime; -} - -sub timestamp_diff_in_seconds { - local($this, $timestamp1, $timestamp2) = @_; - - my $epochtime1 = $this->timestamp_to_seconds($timestamp1); - my $epochtime2 = $this->timestamp_to_seconds($timestamp2); - return $epochtime2 - $epochtime1; -} - -sub dirhash { - # maps string to hash of length 4 with characters [a-z2-8] (shorter acc. to $len) - local($this, $s, $len) = @_; - - $hash = 9999; - $mega = 2 ** 20; - $mega1 = $mega - 1; - $giga = 2 ** 26; - foreach $c (split //, $s) { - $hash = $hash*33 + ord($c); - $hash = ($hash >> 20) ^ ($hash & $mega1) if $hash >= $giga; - } - while ($hash >= $mega) { - $hash = ($hash >> 20) ^ ($hash & $mega1); - } - $result = ""; - while ($hash) { - $c = $hash & 31; - $result .= CORE::chr($c + (($c >= 26) ? 24 : 97)); - $hash = $hash >> 5; - } - while (length($result) < 4) { - $result .= "8"; - } - return substr($result, 0, $len) if $len; - return $result; -} - -sub full_path_python { - - foreach $bin_path (split(":", "/usr/sbin:/usr/bin:/bin:/usr/local/bin")) { - return $python if -x ($python = "$bin_path/python"); - } - return "python"; -} - -sub string_contains_unbalanced_paras { - local($this, $s) = @_; - - return 0 unless $s =~ /[(){}\[\]]/; - $rest = $s; - while (($pre,$left,$right,$post) = ($rest =~ /^(.*)([({\[]).*?([\]})])(.*)$/)) { - return 1 unless (($left eq "(") && ($right eq ")")) - || (($left eq "[") && ($right eq "]")) - || (($left eq "{") && ($right eq "}")); - $rest = "$pre$post"; - } - return 1 if $rest =~ /[(){}\[\]]/; - return 0; -} - -sub dequote_string { - local($this, $s) = @_; - - if ($s =~ /^".*"$/) { - $s = substr($s, 1, -1); - $s =~ s/\\"/"/g; - return $s; - } elsif ($s =~ /^'.*'$/) { - $s = substr($s, 1, -1); - $s =~ s/\\'/'/g; - return $s; - } else { - return $s; - } -} - -sub defined_non_space { - local($this, $s) = @_; - - return (defined($s) && ($s =~ /\S/)); -} - -sub default_if_undefined { - local($this, $s, $default) = @_; - - return (defined($s) ? $s : $default); -} - -sub remove_empties { - local($this, @list) = @_; - - @filtered_list = (); - foreach $elem (@list) { - push(@filtered_list, $elem) if defined($elem) && (! ($elem =~ /^\s*$/)) && (! $this->member($elem, @filtered_list)); - } - - return @filtered_list; -} - -# copied from AMRexp.pm -sub new_var_for_surf_amr { - local($this, $amr_s, $s) = @_; - - my $letter = ($s =~ /^[a-z]/i) ? 
lc substr($s, 0, 1) : "x"; - return $letter unless ($amr_s =~ /:\S+\s+\($letter\s+\//) - || ($amr_s =~ /\s\($letter\s+\//) - || ($amr_s =~ /^\s*\($letter\s+\//); # ))) - my $i = 2; - while (($amr_s =~ /:\S+\s+\($letter$i\s+\//) - || ($amr_s =~ /\s+\($letter$i\s+\//) - || ($amr_s =~ /^\s*\($letter$i\s+\//)) { # ))) - $i++; - } - return "$letter$i"; -} - -# copied from AMRexp.pm -sub new_vars_for_surf_amr { - local($this, $amr_s, $ref_amr_s) = @_; - - my $new_amr_s = ""; - my %new_var_ht = (); - my $remaining_amr_s = $amr_s; - my $pre; my $var; my $concept; my $post; - while (($pre, $var, $concept, $post) = ($remaining_amr_s =~ /^(.*?\()([a-z]\d*)\s+\/\s+([^ ()\s]+)(.*)$/s)) { - $new_var = $this->new_var_for_surf_amr("$ref_amr_s $new_amr_s", $concept); - $new_var_ht{$var} = $new_var; - $new_amr_s .= "$pre$new_var / $concept"; - $remaining_amr_s = $post; - } - $new_amr_s .= $remaining_amr_s; - - # also update any reentrancy variables - $remaining_amr_s = $new_amr_s; - $new_amr_s2 = ""; - while (($pre, $var, $post) = ($remaining_amr_s =~ /^(.*?:\S+\s+)([a-z]\d*)([ ()\s].*)$/s)) { - $new_var = $new_var_ht{$var} || $var; - $new_amr_s2 .= "$pre$new_var"; - $remaining_amr_s = $post; - } - $new_amr_s2 .= $remaining_amr_s; - - return $new_amr_s2; -} - -sub update_inner_span_for_id { - local($this, $html_line, $slot, $new_value) = @_; - # e.g. slot: workset-language-name value: Uyghur - - if (defined($new_value) - && (($pre, $old_value, $post) = ($html_line =~ /^(.*]* id="$slot"[^<>]*>)([^<>]*)(<\/span\b[^<>]*>.*)$/i)) - && ($old_value ne $new_value)) { - # print STDERR "Inserting new $slot $old_value -> $new_value\n"; - return $pre . $new_value . $post . "\n"; - } else { - # no change - return $html_line; - } -} - -sub levenshtein_distance { - local($this, $s1, $s2) = @_; - - my $i; - my $j; - my @distance; - my @s1_chars = $utf8->split_into_utf8_characters($s1, "return only chars", *empty_ht); - my $s1_length = $#s1_chars + 1; - my @s2_chars = $utf8->split_into_utf8_characters($s2, "return only chars", *empty_ht); - my $s2_length = $#s2_chars + 1; - for ($i = 0; $i <= $s1_length; $i++) { - $distance[$i][0] = $i; - } - for ($j = 1; $j <= $s2_length; $j++) { - $distance[0][$j] = $j; - } - for ($j = 1; $j <= $s2_length; $j++) { - for ($i = 1; $i <= $s1_length; $i++) { - my $substitution_cost = ($s1_chars[$i-1] eq $s2_chars[$j-1]) ? 
0 : 1; - $distance[$i][$j] = $this->min($distance[$i-1][$j] + 1, - $distance[$i][$j-1] + 1, - $distance[$i-1][$j-1] + $substitution_cost); - # print STDERR "SC($i,$j) = $substitution_cost\n"; - # $d = $distance[$i][$j]; - # print STDERR "D($i,$j) = $d\n"; - } - } - return $distance[$s1_length][$s2_length]; -} - -sub markup_parts_of_string_in_common_with_ref { - local($this, $s, $ref, $start_markup, $end_markup, $deletion_markup, $verbose) = @_; - - # \x01 temporary start-markup - # \x02 temporary end-markup - # \x03 temporary deletion-markup - $s =~ s/[\x01-\x03]//g; - $ref =~ s/[\x01-\x03]//g; - my $i; - my $j; - my @distance; - my @s_chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - my $s_length = $#s_chars + 1; - my @ref_chars = $utf8->split_into_utf8_characters($ref, "return only chars", *empty_ht); - my $ref_length = $#ref_chars + 1; - $distance[0][0] = 0; - $del_ins_subst_op[0][0] = "-"; - for ($i = 1; $i <= $s_length; $i++) { - $distance[$i][0] = $i; - $del_ins_subst_op[$i][0] = 0; - } - for ($j = 1; $j <= $ref_length; $j++) { - $distance[0][$j] = $j; - $del_ins_subst_op[0][$j] = 1; - } - for ($j = 1; $j <= $ref_length; $j++) { - for ($i = 1; $i <= $s_length; $i++) { - my $substitution_cost = (($s_chars[$i-1] eq $ref_chars[$j-1])) ? 0 : 1; - my @del_ins_subst_list = ($distance[$i-1][$j] + 1, - $distance[$i][$j-1] + 1, - $distance[$i-1][$j-1] + $substitution_cost); - my $min = $this->min(@del_ins_subst_list); - my $del_ins_subst_position = $this->position($min, @del_ins_subst_list); - $distance[$i][$j] = $min; - $del_ins_subst_op[$i][$j] = $del_ins_subst_position; - } - } - $d = $distance[$s_length][$ref_length]; - print STDERR "markup_parts_of_string_in_common_with_ref LD($s,$ref) = $d\n" if $verbose; - for ($j = 0; $j <= $ref_length; $j++) { - for ($i = 0; $i <= $s_length; $i++) { - $d = $distance[$i][$j]; - $op = $del_ins_subst_op[$i][$j]; - print STDERR "$d($op) " if $verbose; - } - print STDERR "\n" if $verbose; - } - my $result = ""; - my $i_end = $s_length; - my $j_end = $ref_length; - my $cost = $distance[$i_end][$j_end]; - $i = $i_end; - $j = $j_end; - while (1) { - $result2 = $result; - $result2 =~ s/\x01/$start_markup/g; - $result2 =~ s/\x02/$end_markup/g; - $result2 =~ s/\x03/$deletion_markup/g; - print STDERR "i:$i i-end:$i_end j:$j j-end:$j_end r: $result2\n" if $verbose; - # matching characters - if ($i && $j && ($del_ins_subst_op[$i][$j] == 2) && ($distance[$i-1][$j-1] == $distance[$i][$j])) { - $i--; - $j--; - } else { - # previously matching characters - if (($i < $i_end) && ($j < $j_end)) { - my $sub_s = join("", @s_chars[$i .. $i_end-1]); - $result = "\x01" . $sub_s . "\x02" . $result; - } - # character substitution - if ($i && $j && ($del_ins_subst_op[$i][$j] == 2)) { - $i--; - $j--; - $result = $s_chars[$i] . $result; - } elsif ($i && ($del_ins_subst_op[$i][$j] == 0)) { - $i--; - $result = $s_chars[$i] . $result; - } elsif ($j && ($del_ins_subst_op[$i][$j] == 1)) { - $j--; - $result = "\x03" . 
$result; - } else { - last; - } - $i_end = $i; - $j_end = $j; - } - } - $result2 = $result; - $result2 =~ s/\x01/$start_markup/g; - $result2 =~ s/\x02/$end_markup/g; - $result2 =~ s/\x03/$deletion_markup/g; - print STDERR "i:$i i-end:$i_end j:$j j-end:$j_end r: $result2 *\n" if $verbose; - $result =~ s/(\x02)\x03+(\x01)/$1$deletion_markup$2/g; - $result =~ s/(\x02)\x03+$/$1$deletion_markup/g; - $result =~ s/^\x03+(\x01)/$deletion_markup$1/g; - $result =~ s/\x03//g; - $result =~ s/\x01/$start_markup/g; - $result =~ s/\x02/$end_markup/g; - return $result; -} - -sub env_https { - my $https = $ENV{'HTTPS'}; - return 1 if $https && ($https eq "on"); - - my $http_via = $ENV{'HTTP_VIA'}; - return 1 if $http_via && ($http_via =~ /\bHTTPS\b.* \d+(?:\.\d+){3,}:443\b/); # tmp for beta.isi.edu - - return 0; -} - -sub env_http_host { - return $ENV{'HTTP_HOST'} || ""; -} - -sub env_script_filename { - return $ENV{'SCRIPT_FILENAME'} || ""; -} - -sub cgi_mt_app_root_dir { - local($this, $target) = @_; - my $s; - if ($target =~ /filename/i) { - $s = $ENV{'SCRIPT_FILENAME'} || ""; - } else { - $s = $ENV{'SCRIPT_NAME'} || ""; - } - return "" unless $s; - return $d if ($d) = ($s =~ /^(.*?\/(?:amr-editor|chinese-room-editor|utools|romanizer\/version\/[-.a-z0-9]+|romanizer))\//); - return $d if ($d) = ($s =~ /^(.*)\/(?:bin|src|scripts?)\/[^\/]*$/); - return $d if ($d) = ($s =~ /^(.*)\/[^\/]*$/); - return ""; -} - -sub parent_dir { - local($this, $dir) = @_; - - $dir =~ s/\/[^\/]+\/?$//; - return $dir || "/"; -} - -sub span_start { - local($this, $span, $default) = @_; - - $default = "" unless defined($default); - return (($start) = ($span =~ /^(\d+)-\d+$/)) ? $start : $default; -} - -sub span_end { - local($this, $span, $default) = @_; - - $default = "" unless defined($default); - return (($end) = ($span =~ /^\d+-(\d+)$/)) ? $end : $default; -} - -sub oct_mode { - local($this, $filename) = @_; - - @stat = stat($filename); - return "" unless @stat; - $mode = $stat[2]; - $oct_mode = sprintf("%04o", $mode & 07777); - return $oct_mode; -} - -sub csv_to_list { - local($this, $s, $control_string) = @_; - # Allow quoted string such as "Wait\, what?" as element with escaped comma inside. 
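
As an aside, a rough Python sketch of what `csv_to_list` does may help readers who do not parse Perl regexes easily. This is illustrative only and not part of the library: `csv_to_list_sketch` is a hypothetical name, and the sketch drops the UTF-8 continuation-byte handling and the `simple-comma-ok` mode of the original.

import re

def csv_to_list_sketch(s, strip=False, ignore_empty=False):
    # Split a comma-separated string; a double-quoted field may contain
    # escaped commas/quotes such as "Wait\, what?".
    items = []
    while s != "":
        m = (re.match(r'^"((?:\\[,"]|[^,"])*)"(,.*|)$', s)
             or re.match(r'^([^,]*)(,.*|\s*)$', s))
        if m is None:
            # give up on malformed input, as the Perl version does
            break
        elem, rest = m.group(1), m.group(2)
        if strip:
            elem = elem.strip()
        if not (ignore_empty and elem == ""):
            items.append(elem)
        s = rest[1:] if rest.startswith(",") else rest
    return items

# csv_to_list_sketch(r'"Wait\, what?",42, foo', strip=True)
# -> ['Wait\\, what?', '42', 'foo']
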
- - $control_string = "" unless defined($control_string); - $strip_p = ($control_string =~ /\bstrip\b/); - $allow_simple_commas_in_quote = ($control_string =~ /\bsimple-comma-ok\b/); - $ignore_empty_elem_p = ($control_string =~ /\bno-empty\b/); - @cvs_list = (); - while ($s ne "") { - if ((($elem, $rest) = ($s =~ /^"((?:\\[,\"]|[^,\"][\x80-\xBF]*)*)"(,.*|)$/)) - || ($allow_simple_commas_in_quote - && (($elem, $rest) = ($s =~ /^"((?:\\[,\"]|[^\"][\x80-\xBF]*)*)"(,.*|)$/))) - || (($elem, $rest) = ($s =~ /^([^,]*)(,.*|\s*)$/)) - || (($elem, $rest) = ($s =~ /^(.*)()$/))) { - if ($strip_p) { - $elem =~ s/^\s*//; - $elem =~ s/\s*$//; - } - push(@cvs_list, $elem) unless $ignore_empty_elem_p && ($elem eq ""); - $rest =~ s/^,//; - $s = $rest; - } else { - print STDERR "Error in csv_to_list processing $s\n"; - last; - } - } - return @cvs_list; -} - -sub kl_divergence { - local($this, $distribution_id, $gold_distribution_id, *ht, $smoothing) = @_; - - my $total_count = $ht{DISTRIBUTION_TOTAL_COUNT}->{$distribution_id}; - my $total_gold_count = $ht{DISTRIBUTION_TOTAL_COUNT}->{$gold_distribution_id}; - return unless $total_count && $total_gold_count; - - my @values = keys %{$ht{DISTRIBUTION_VALUE_COUNT}->{$gold_distribution_id}}; - my $n_values = $#values + 1; - - my $min_total_count = $this->min($total_count, $total_gold_count); - $smoothing = 1 - (10000/((100+$min_total_count)**2)) unless defined($smoothing); - return unless $smoothing; - my $smoothed_n_values = $smoothing * $n_values; - my $divergence = 0; - foreach $value (@values) { - my $count = $ht{DISTRIBUTION_VALUE_COUNT}->{$distribution_id}->{$value} || 0; - my $gold_count = $ht{DISTRIBUTION_VALUE_COUNT}->{$gold_distribution_id}->{$value}; - my $p = ($count + $smoothing) / ($total_count + $smoothed_n_values); - my $q = ($gold_count + $smoothing) / ($total_gold_count + $smoothed_n_values); - if ($p == 0) { - # no impact on divergence - } elsif ($q) { - my $incr = $p * CORE::log($p/$q); - $divergence += $incr; - my $incr2 = $this->round_to_n_decimal_places($incr, 5); - my $p2 = $this->round_to_n_decimal_places($p, 5); - my $q2 = $this->round_to_n_decimal_places($q, 5); - $incr2 = "+" . $incr2 if $incr > 0; - $log = " value: $value count: $count gold_count: $gold_count p: $p2 q: $q2 $incr2\n"; - $ht{KL_DIVERGENCE_LOG}->{$distribution_id}->{$gold_distribution_id}->{$value} = $log; - $ht{KL_DIVERGENCE_INCR}->{$distribution_id}->{$gold_distribution_id}->{$value} = $incr; - } else { - $divergence += 999; - } - } - return $divergence; -} - -sub read_ISO_8859_named_entities { - local($this, *ht, $filename, $verbose) = @_; - # e.g. from /nfs/isd/ulf/arabic/data/ISO-8859-1-HTML-named-entities.txt - # - # - # - # - # - # - - my $n = 0; - if (open(IN, $filename)) { - while () { - s/^\xEF\xBB\xBF//; - if (($name, $dec_unicode) = ($_ =~ /^{$name} = $dec_unicode; - $ht{HTML_ENTITY_DECUNICODE_TO_NAME}->{$dec_unicode} = $name; - $ht{HTML_ENTITY_NAME_TO_UTF8}->{$name} = $utf8->unicode2string($dec_unicode); - $n++; - # print STDERR "read_ISO_8859_named_entities $name $dec_unicode .\n" if $name =~ /dash/; - } - } - close(IN); - print STDERR "Loaded $n entries from $filename\n" if $verbose; - } else { - print STDERR "Could not open $filename\n" if $verbose; - } -} - -sub neg { - local($this, $x) = @_; - - # robust - return (defined($x) && ($x =~ /^-?\d+(?:\.\d+)?$/)) ? (- $x) : $x; -} - -sub read_ttable_gloss_data { - local($this, $filename, $lang_code, *ht, $direction) = @_; - # e.g. 
/nfs/isd/ulf/croom/oov-lanpairs/som-eng/som-eng-ttable-glosses.txt - - $direction = "f to e" unless defined($direction); - if (open(IN, $filename)) { - while () { - if (($headword, $gloss) = ($_ =~ /^(.*?)\t(.*?)\s*$/)) { - if ($direction eq "e to f") { - $ht{TTABLE_E_GLOSS}->{$lang_code}->{$headword} = $gloss; - } else { - $ht{TTABLE_F_GLOSS}->{$lang_code}->{$headword} = $gloss; - } - } - } - close(IN); - } -} - -sub format_gloss_for_tooltop { - local($this, $gloss) = @_; - - $gloss =~ s/^\s*/\t/; - $gloss =~ s/\s*$//; - $gloss =~ s/ / /g; - $gloss =~ s/\t/ /g; - return $gloss; -} - -sub obsolete_tooltip { - local($this, $s, $lang_code, *ht) = @_; - - return $gloss if defined($gloss = $ht{TTABLE_F_GLOSS}->{$lang_code}->{$s}); - @e_s = sort { $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$b} - <=> $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$a} } - keys %{$ht{T_TABLE_F_E_C}->{$lang_code}->{$s}}; - if (@e_s) { - $e = shift @e_s; - $count = $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$e}; - $min_count = $this->max($count * 0.01, 1.0); - $count =~ s/(\.\d\d)\d*$/$1/; - $result = "$s: $e ($count)"; - $n = 1; - while (@e_s) { - $e = shift @e_s; - $count = $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$e}; - last if $count < $min_count; - $count =~ s/(\.\d\d)\d*$/$1/; - $result .= " $e ($count)"; - $n++; - last if $n >= 10; - } - $ht{TTABLE_F_GLOSS}->{$lang_code}->{$s} = $result; - return $result; - } else { - return ""; - } -} - -sub markup_html_line_init { - local($this, $s, *ht, $id) = @_; - - my @chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - $ht{S}->{$id} = $s; -} - -sub markup_html_line_regex { - local($this, $id, *ht, $regex, $m_slot, $m_value, *LOG) = @_; - - unless ($regex eq "") { - my $s = $ht{S}->{$id}; - my $current_pos = 0; - while (($pre, $match_s, $post) = ($s =~ /^(.*?)($regex)(.*)$/)) { - $current_pos += $utf8->length_in_utf8_chars($pre); - my $match_len = $utf8->length_in_utf8_chars($match_s); - $ht{START}->{$id}->{$current_pos}->{$m_slot}->{$m_value} = 1; - $ht{STOP}->{$id}->{($current_pos+$match_len)}->{$m_slot}->{$m_value} = 1; - $current_pos += $match_len; - $s = $post; - } - } -} - -sub html_markup_line { - local($this, $id, *ht, *LOG) = @_; - - my @titles = (); - my @colors = (); - my @text_decorations = (); - - my $s = $ht{S}->{$id}; - # print LOG "html_markup_line $id: $s\n"; - my @chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - my $markedup_s = ""; - - my $new_title = ""; - my $new_color = ""; - my $new_text_decoration = ""; - my $n_spans = 0; - my $i; - foreach $i ((0 .. 
($#chars+1))) { - my $stop_span_p = 0; - foreach $m_slot (keys %{$ht{STOP}->{$id}->{$i}}) { - foreach $m_value (keys %{$ht{STOP}->{$id}->{$i}->{$m_slot}}) { - if ($m_slot eq "title") { - my $last_positition = $this->last_position($m_value, @titles); - splice(@titles, $last_positition, 1) if $last_positition >= 0; - $stop_span_p = 1; - } elsif ($m_slot eq "color") { - my $last_positition = $this->last_position($m_value, @colors); - splice(@colors, $last_positition, 1) if $last_positition >= 0; - $stop_span_p = 1; - } elsif ($m_slot eq "text-decoration") { - my $last_positition = $this->last_position($m_value, @text_decorations); - splice(@text_decorations, $last_positition, 1) if $last_positition >= 0; - $stop_span_p = 1; - } - } - } - if ($stop_span_p) { - $markedup_s .= ""; - $n_spans--; - } - my $start_span_p = 0; - foreach $m_slot (keys %{$ht{START}->{$id}->{$i}}) { - foreach $m_value (keys %{$ht{START}->{$id}->{$i}->{$m_slot}}) { - if ($m_slot eq "title") { - push(@titles, $m_value); - $start_span_p = 1; - } elsif ($m_slot eq "color") { - push(@colors, $m_value); - $start_span_p = 1; - } elsif ($m_slot eq "text-decoration") { - push(@text_decorations, $m_value); - $start_span_p = 1; - } - } - } - if ($stop_span_p || $start_span_p) { - my $new_title = (@titles) ? $titles[$#titles] : ""; - my $new_color = (@colors) ? $colors[$#colors] : ""; - my $new_text_decoration = (@text_decorations) ? $text_decorations[$#text_decorations] : ""; - if ($new_title || $new_color || $new_text_decoration) { - my $args = ""; - if ($new_title) { - $g_title = $this->guard_html_quote($new_title); - $args .= " title=\"$g_title\""; - } - if ($new_color || $new_text_decoration) { - $g_color = $this->guard_html_quote($new_color); - $g_text_decoration = $this->guard_html_quote($new_text_decoration); - $color_clause = ($new_color) ? "color:$g_color;" : ""; - $text_decoration_clause = ($new_text_decoration) ? "text-decoration:$g_text_decoration;" : ""; - $text_decoration_clause =~ s/text-decoration:(border-bottom:)/$1/g; - $args .= " style=\"$color_clause$text_decoration_clause\""; - } - if ($n_spans) { - $markedup_s .= ""; - $n_spans--; - } - $markedup_s .= ""; - $n_spans++; - } - } - $markedup_s .= $chars[$i] if $i <= $#chars; - } - print LOG "Error in html_markup_line $id final no. of open spans: $n_spans\n" if $n_spans && $tokenization_log_verbose; - return $markedup_s; -} - -sub offset_adjustment { - local($this, $g, $s, $offset, $snt_id, *ht, *LOG, $control) = @_; - # s(tring) e.g. "can't" - # g(old string) e.g. "can not" - # Typically when s is a slight variation of g (e.g. with additional tokenization spaces in s) - # returns mapping 0->0, 1->1, 2->2, 3->3, 6->4, 7->5 - - $control = "" unless defined($control); - my $verbose = ($control =~ /\bverbose\b/); - my $s_offset = 0; - my $g_offset = 0; - my @s_chars = $utf8->split_into_utf8_characters($s, "return only chars", *ht); - my @g_chars = $utf8->split_into_utf8_characters($g, "return only chars", *ht); - my $s_len = $#s_chars + 1; - my $g_len = $#g_chars + 1; - $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset; - $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{($s_offset+$s_len)} = $g_offset+$g_len; - - while (($s_offset < $s_len) && ($g_offset < $g_len)) { - if ($s_chars[$s_offset] eq $g_chars[$g_offset]) { - $s_offset++; - $g_offset++; - $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset; - } else { - my $best_gm = 0; - my $best_sm = 0; - my $best_match_len = 0; - foreach $max_m ((1 .. 4)) { - foreach $sm ((0 .. 
$max_m)) { - $max_match_len = 0; - while ((($s_index = $s_offset+$sm+$max_match_len) < $s_len) - && (($g_index = $g_offset+$max_m+$max_match_len) < $g_len)) { - if ($s_chars[$s_index] eq $g_chars[$g_index]) { - $max_match_len++; - } else { - last; - } - } - if ($max_match_len > $best_match_len) { - $best_match_len = $max_match_len; - $best_sm = $sm; - $best_gm = $max_m; - } - } - foreach $gm ((0 .. $max_m)) { - $max_match_len = 0; - while ((($s_index = $s_offset+$max_m+$max_match_len) < $s_len) - && (($g_index = $g_offset+$gm+$max_match_len) < $g_len)) { - if ($s_chars[$s_index] eq $g_chars[$g_index]) { - $max_match_len++; - } else { - last; - } - } - if ($max_match_len > $best_match_len) { - $best_match_len = $max_match_len; - $best_sm = $max_m; - $best_gm = $gm; - } - } - } - if ($best_match_len) { - $s_offset += $best_sm; - $g_offset += $best_gm; - $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset; - } else { - last; - } - } - } - if ($verbose) { - foreach $s_offset (sort { $a <=> $b } - keys %{$ht{OFFSET_MAP}->{$snt_id}->{$offset}}) { - my $g_offset = $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset}; - print LOG " OFFSET_MAP $snt_id.$offset $s/$g $s_offset -> $g_offset\n" if $tokenization_log_verbose; - } - } -} - -sub length_in_utf8_chars { - local($this, $s) = @_; - - $s =~ s/[\x80-\xBF]//g; - $s =~ s/[\x00-\x7F\xC0-\xFF]/c/g; - return length($s); -} - -sub split_into_utf8_characters { - local($this, $text) = @_; - # "return only chars; return trailing whitespaces" - - @characters = (); - while (($char, $rest) = ($text =~ /^(.[\x80-\xBF]*)(.*)$/)) { - push(@characters, $char); - $text = $rest; - } - return @characters; -} - -sub first_char_of_string { - local($this, $s) = @_; - - $s =~ s/^(.[\x80-\xBF]*).*$/$1/; - return $s; -} - -sub last_char_of_string { - local($this, $s) = @_; - - $s =~ s/^.*([^\x80-\xBF][\x80-\xBF]*)$/$1/; - return $s; -} - -sub first_n_chars_of_string { - local($this, $s, $n) = @_; - - $s =~ s/^((?:.[\x80-\xBF]*){$n,$n}).*$/$1/; - return $s; -} - -sub last_n_chars_of_string { - local($this, $s, $n) = @_; - - $s =~ s/^.*((?:[^\x80-\xBF][\x80-\xBF]*){$n,$n})$/$1/; - return $s; -} - - -1; diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py deleted file mode 100644 index 61617a1739ce196abba1e9a6f9ad9e9f4b37b9c1..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py +++ /dev/null @@ -1,363 +0,0 @@ -import math -import os -import json -import numpy as np -import torch -import torchaudio.compliance.kaldi as kaldi -import yaml -from fairseq import checkpoint_utils, tasks -from fairseq.file_io import PathManager - -try: - from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS - from simuleval.agents import SpeechAgent - from simuleval.states import ListEntry, SpeechStates -except ImportError: - print("Please install simuleval 'pip install simuleval'") - -SHIFT_SIZE = 10 -WINDOW_SIZE = 25 -SAMPLE_RATE = 16000 -FEATURE_DIM = 80 -BOW_PREFIX = "\u2581" - - -class OnlineFeatureExtractor: - """ - Extract speech feature on the fly. 
- """ - - def __init__(self, args): - self.shift_size = args.shift_size - self.window_size = args.window_size - assert self.window_size >= self.shift_size - - self.sample_rate = args.sample_rate - self.feature_dim = args.feature_dim - self.num_samples_per_shift = int(self.shift_size * self.sample_rate / 1000) - self.num_samples_per_window = int(self.window_size * self.sample_rate / 1000) - self.len_ms_to_samples = lambda x: x * self.sample_rate / 1000 - self.previous_residual_samples = [] - self.global_cmvn = args.global_cmvn - - def clear_cache(self): - self.previous_residual_samples = [] - - def __call__(self, new_samples): - samples = self.previous_residual_samples + new_samples - if len(samples) < self.num_samples_per_window: - self.previous_residual_samples = samples - return - - # num_frames is the number of frames from the new segment - num_frames = math.floor( - (len(samples) - self.len_ms_to_samples(self.window_size - self.shift_size)) - / self.num_samples_per_shift - ) - - # the number of frames used for feature extraction - # including some part of thte previous segment - effective_num_samples = int( - num_frames * self.len_ms_to_samples(self.shift_size) - + self.len_ms_to_samples(self.window_size - self.shift_size) - ) - - input_samples = samples[:effective_num_samples] - self.previous_residual_samples = samples[ - num_frames * self.num_samples_per_shift: - ] - - torch.manual_seed(1) - output = kaldi.fbank( - torch.FloatTensor(input_samples).unsqueeze(0), - num_mel_bins=self.feature_dim, - frame_length=self.window_size, - frame_shift=self.shift_size, - ).numpy() - - output = self.transform(output) - - return torch.from_numpy(output) - - def transform(self, input): - if self.global_cmvn is None: - return input - - mean = self.global_cmvn["mean"] - std = self.global_cmvn["std"] - - x = np.subtract(input, mean) - x = np.divide(x, std) - return x - - -class TensorListEntry(ListEntry): - """ - Data structure to store a list of tensor. 
- """ - - def append(self, value): - - if len(self.value) == 0: - self.value = value - return - - self.value = torch.cat([self.value] + [value], dim=0) - - def info(self): - return { - "type": str(self.new_value_type), - "length": self.__len__(), - "value": "" if type(self.value) is list else self.value.size(), - } - - -class FairseqSimulSTAgent(SpeechAgent): - - speech_segment_size = 40 # in ms, 4 pooling ratio * 10 ms step size - - def __init__(self, args): - super().__init__(args) - - self.eos = DEFAULT_EOS - - self.gpu = getattr(args, "gpu", False) - - self.args = args - - self.load_model_vocab(args) - - if getattr( - self.model.decoder.layers[0].encoder_attn, - 'pre_decision_ratio', - None - ) is not None: - self.speech_segment_size *= ( - self.model.decoder.layers[0].encoder_attn.pre_decision_ratio - ) - - args.global_cmvn = None - if args.config: - with open(os.path.join(args.data_bin, args.config), "r") as f: - config = yaml.load(f, Loader=yaml.BaseLoader) - - if "global_cmvn" in config: - args.global_cmvn = np.load(config["global_cmvn"]["stats_npz_path"]) - - if args.global_stats: - with PathManager.open(args.global_stats, "r") as f: - global_cmvn = json.loads(f.read()) - self.global_cmvn = {"mean": global_cmvn["mean"], "std": global_cmvn["stddev"]} - - self.feature_extractor = OnlineFeatureExtractor(args) - - self.max_len = args.max_len - - self.force_finish = args.force_finish - - torch.set_grad_enabled(False) - - def build_states(self, args, client, sentence_id): - # Initialize states here, for example add customized entry to states - # This function will be called at beginning of every new sentence - states = SpeechStates(args, client, sentence_id, self) - self.initialize_states(states) - return states - - def to_device(self, tensor): - if self.gpu: - return tensor.cuda() - else: - return tensor.cpu() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--model-path', type=str, required=True, - help='path to your pretrained model.') - parser.add_argument("--data-bin", type=str, required=True, - help="Path of data binary") - parser.add_argument("--config", type=str, default=None, - help="Path to config yaml file") - parser.add_argument("--global-stats", type=str, default=None, - help="Path to json file containing cmvn stats") - parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for target text") - parser.add_argument("--tgt-splitter-path", type=str, default=None, - help="Subword splitter model path for target text") - parser.add_argument("--user-dir", type=str, default="examples/simultaneous_translation", - help="User directory for simultaneous translation") - parser.add_argument("--max-len", type=int, default=200, - help="Max length of translation") - parser.add_argument("--force-finish", default=False, action="store_true", - help="Force the model to finish the hypothsis if the source is not finished") - parser.add_argument("--shift-size", type=int, default=SHIFT_SIZE, - help="Shift size of feature extraction window.") - parser.add_argument("--window-size", type=int, default=WINDOW_SIZE, - help="Window size of feature extraction window.") - parser.add_argument("--sample-rate", type=int, default=SAMPLE_RATE, - help="Sample rate") - parser.add_argument("--feature-dim", type=int, default=FEATURE_DIM, - help="Acoustic feature dimension.") - - # fmt: on - return parser - - def load_model_vocab(self, args): - - filename = args.model_path - if not os.path.exists(filename): - raise IOError("Model file 
not found: {}".format(filename)) - - state = checkpoint_utils.load_checkpoint_to_cpu(filename) - - task_args = state["cfg"]["task"] - task_args.data = args.data_bin - - if args.config is not None: - task_args.config_yaml = args.config - - task = tasks.setup_task(task_args) - - # build model for ensemble - state["cfg"]["model"].load_pretrained_encoder_from = None - state["cfg"]["model"].load_pretrained_decoder_from = None - self.model = task.build_model(state["cfg"]["model"]) - self.model.load_state_dict(state["model"], strict=True) - self.model.eval() - self.model.share_memory() - - if self.gpu: - self.model.cuda() - - # Set dictionary - self.dict = {} - self.dict["tgt"] = task.target_dictionary - - def initialize_states(self, states): - self.feature_extractor.clear_cache() - states.units.source = TensorListEntry() - states.units.target = ListEntry() - states.incremental_states = dict() - - def segment_to_units(self, segment, states): - # Convert speech samples to features - features = self.feature_extractor(segment) - if features is not None: - return [features] - else: - return [] - - def units_to_segment(self, units, states): - # Merge sub word to full word. - if self.model.decoder.dictionary.eos() == units[0]: - return DEFAULT_EOS - - segment = [] - if None in units.value: - units.value.remove(None) - - for index in units: - if index is None: - units.pop() - token = self.model.decoder.dictionary.string([index]) - if token.startswith(BOW_PREFIX): - if len(segment) == 0: - segment += [token.replace(BOW_PREFIX, "")] - else: - for j in range(len(segment)): - units.pop() - - string_to_return = ["".join(segment)] - - if self.model.decoder.dictionary.eos() == units[0]: - string_to_return += [DEFAULT_EOS] - - return string_to_return - else: - segment += [token.replace(BOW_PREFIX, "")] - - if ( - len(units) > 0 - and self.model.decoder.dictionary.eos() == units[-1] - or len(states.units.target) > self.max_len - ): - tokens = [self.model.decoder.dictionary.string([unit]) for unit in units] - return ["".join(tokens).replace(BOW_PREFIX, "")] + [DEFAULT_EOS] - - return None - - def update_model_encoder(self, states): - if len(states.units.source) == 0: - return - src_indices = self.to_device( - states.units.source.value.unsqueeze(0) - ) - src_lengths = self.to_device( - torch.LongTensor([states.units.source.value.size(0)]) - ) - - states.encoder_states = self.model.encoder(src_indices, src_lengths) - torch.cuda.empty_cache() - - def update_states_read(self, states): - # Happens after a read action. 
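
To make the agent's control flow easier to follow: the simuleval evaluator repeatedly calls `policy()` and then either feeds more source audio (READ_ACTION) or asks for the next target token (WRITE_ACTION). The sketch below is illustrative only, not the actual simuleval client; `drive_agent`, `read_segment`, and `emit` are hypothetical stand-ins for the evaluator's I/O, and the real orchestration is done by simuleval itself.

def drive_agent(agent, states, read_segment, emit):
    # Minimal, hypothetical driver loop around the methods defined in this agent.
    while True:
        if agent.policy(states) == READ_ACTION:
            segment = read_segment()                      # next chunk of raw audio samples
            for unit in agent.segment_to_units(segment, states):
                states.units.source.append(unit)
            agent.update_states_read(states)              # re-encode the longer source prefix
        else:                                             # WRITE_ACTION
            index = agent.predict(states)                 # next target subword index (or None)
            states.units.target.append(index)
            words = agent.units_to_segment(states.units.target, states)
            if words is not None:
                emit(words)                               # full words ready to be output
                if DEFAULT_EOS in words:
                    break
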
- self.update_model_encoder(states) - - def policy(self, states): - if not getattr(states, "encoder_states", None): - return READ_ACTION - - tgt_indices = self.to_device( - torch.LongTensor( - [self.model.decoder.dictionary.eos()] - + [x for x in states.units.target.value if x is not None] - ).unsqueeze(0) - ) - - states.incremental_states["steps"] = { - "src": states.encoder_states["encoder_out"][0].size(0), - "tgt": 1 + len(states.units.target), - } - - states.incremental_states["online"] = {"only": torch.tensor(not states.finish_read())} - - x, outputs = self.model.decoder.forward( - prev_output_tokens=tgt_indices, - encoder_out=states.encoder_states, - incremental_state=states.incremental_states, - ) - - states.decoder_out = x - - states.decoder_out_extra = outputs - - torch.cuda.empty_cache() - - if outputs.action == 0: - return READ_ACTION - else: - return WRITE_ACTION - - def predict(self, states): - decoder_states = states.decoder_out - - lprobs = self.model.get_normalized_probs( - [decoder_states[:, -1:]], log_probs=True - ) - - index = lprobs.argmax(dim=-1) - - index = index[0, 0].item() - - if ( - self.force_finish - and index == self.model.decoder.dictionary.eos() - and not states.finish_read() - ): - # If we want to force finish the translation - # (don't stop before finish reading), return a None - # self.model.decoder.clear_cache(states.incremental_states) - index = None - - return index diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/model.py b/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/model.py deleted file mode 100644 index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from encoder.params_model import * -from encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, - hidden_size=model_hidden_size, - num_layers=model_num_layers, - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. 
- :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. 
- """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer diff --git a/spaces/mygyasir/digiplay-DreamShaper_8/app.py b/spaces/mygyasir/digiplay-DreamShaper_8/app.py deleted file mode 100644 index 89c278d5182016fc33917ed471178217d5d9ac77..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/digiplay-DreamShaper_8/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr - -interface = gr.Interface.load("models/digiplay/DreamShaper_8", title="Stable Diffusion Image generator", description='Generate Image with AI For Free on Electrosion') -interface.launch() diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py deleted file mode 100644 index 3bcc7953ae57cbb53202876f5ea7838a09ed0baf..0000000000000000000000000000000000000000 --- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py +++ /dev/null @@ -1,371 +0,0 @@ -from pytorch_caney.data.datamodules.mim_datamodule \ - import build_mim_dataloader - -from pytorch_caney.models.mim.mim \ - import build_mim_model - -from pytorch_caney.training.mim_utils \ - import build_optimizer, save_checkpoint - -from pytorch_caney.training.mim_utils import get_grad_norm -from pytorch_caney.lr_scheduler import build_scheduler, setup_scaled_lr -from pytorch_caney.ptc_logging import create_logger -from pytorch_caney.config import get_config - -import argparse -import datetime -import joblib -import numpy as np -import os -import time - -import torch -import torch.cuda.amp as amp -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from timm.utils import AverageMeter - - -def parse_args(): - """ - Parse command-line arguments - """ - parser = argparse.ArgumentParser( - 'pytorch-caney implementation of MiM pre-training script', - add_help=False) - - parser.add_argument( - '--cfg', - type=str, - required=True, - metavar="FILE", - help='path to config file') - - parser.add_argument( - "--data-paths", - nargs='+', - required=True, - help="paths where dataset is stored") - - parser.add_argument( - '--dataset', - type=str, - required=True, - help='Dataset to use') - - parser.add_argument( - '--batch-size', - type=int, - help="batch size for single GPU") - - parser.add_argument( - '--resume', - help='resume from checkpoint') - - parser.add_argument( - '--accumulation-steps', - type=int, - help="gradient accumulation steps") - - parser.add_argument( - '--use-checkpoint', - action='store_true', - help="whether to use gradient checkpointing to save memory") - - parser.add_argument( - '--enable-amp', - 
action='store_true') - - parser.add_argument( - '--disable-amp', - action='store_false', - dest='enable_amp') - - parser.set_defaults(enable_amp=True) - - parser.add_argument( - '--output', - default='output', - type=str, - metavar='PATH', - help='root of output folder, the full path is ' + - '// (default: output)') - - parser.add_argument( - '--tag', - help='tag of experiment') - - args = parser.parse_args() - - config = get_config(args) - - return args, config - - -def train(config, - dataloader, - model, - model_wo_ddp, - optimizer, - lr_scheduler, - scaler): - """ - Start pre-training a specific model and dataset. - - Args: - config: config object - dataloader: dataloader to use - model: model to pre-train - model_wo_ddp: model to pre-train that is not the DDP version - optimizer: pytorch optimizer - lr_scheduler: learning-rate scheduler - scaler: loss scaler - """ - - logger.info("Start training") - - start_time = time.time() - - for epoch in range(config.TRAIN.START_EPOCH, config.TRAIN.EPOCHS): - - dataloader.sampler.set_epoch(epoch) - - execute_one_epoch(config, model, dataloader, - optimizer, epoch, lr_scheduler, scaler) - - if dist.get_rank() == 0 and \ - (epoch % config.SAVE_FREQ == 0 or - epoch == (config.TRAIN.EPOCHS - 1)): - - save_checkpoint(config, epoch, model_wo_ddp, 0., - optimizer, lr_scheduler, scaler, logger) - - total_time = time.time() - start_time - - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - - logger.info('Training time {}'.format(total_time_str)) - - -def execute_one_epoch(config, - model, - dataloader, - optimizer, - epoch, - lr_scheduler, - scaler): - """ - Execute training iterations on a single epoch. - - Args: - config: config object - model: model to pre-train - dataloader: dataloader to use - optimizer: pytorch optimizer - epoch: int epoch number - lr_scheduler: learning-rate scheduler - scaler: loss scaler - """ - - model.train() - - optimizer.zero_grad() - - num_steps = len(dataloader) - - # Set up logging meters - batch_time = AverageMeter() - data_time = AverageMeter() - loss_meter = AverageMeter() - norm_meter = AverageMeter() - loss_scale_meter = AverageMeter() - - start = time.time() - end = time.time() - for idx, (img, mask, _) in enumerate(dataloader): - - data_time.update(time.time() - start) - - img = img.cuda(non_blocking=True) - mask = mask.cuda(non_blocking=True) - - with amp.autocast(enabled=config.ENABLE_AMP): - loss = model(img, mask) - - if config.TRAIN.ACCUMULATION_STEPS > 1: - loss = loss / config.TRAIN.ACCUMULATION_STEPS - scaler.scale(loss).backward() - loss.backward() - if config.TRAIN.CLIP_GRAD: - scaler.unscale_(optimizer) - grad_norm = torch.nn.utils.clip_grad_norm_( - model.parameters(), - config.TRAIN.CLIP_GRAD) - else: - grad_norm = get_grad_norm(model.parameters()) - if (idx + 1) % config.TRAIN.ACCUMULATION_STEPS == 0: - scaler.step(optimizer) - optimizer.zero_grad() - scaler.update() - lr_scheduler.step_update(epoch * num_steps + idx) - else: - optimizer.zero_grad() - scaler.scale(loss).backward() - if config.TRAIN.CLIP_GRAD: - scaler.unscale_(optimizer) - grad_norm = torch.nn.utils.clip_grad_norm_( - model.parameters(), - config.TRAIN.CLIP_GRAD) - else: - grad_norm = get_grad_norm(model.parameters()) - scaler.step(optimizer) - scaler.update() - lr_scheduler.step_update(epoch * num_steps + idx) - - torch.cuda.synchronize() - - loss_meter.update(loss.item(), img.size(0)) - norm_meter.update(grad_norm) - loss_scale_meter.update(scaler.get_scale()) - batch_time.update(time.time() - end) - end = 
time.time() - - if idx % config.PRINT_FREQ == 0: - lr = optimizer.param_groups[0]['lr'] - memory_used = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0) - etas = batch_time.avg * (num_steps - idx) - logger.info( - f'Train: [{epoch}/{config.TRAIN.EPOCHS}][{idx}/{num_steps}]\t' - f'eta {datetime.timedelta(seconds=int(etas))} lr {lr:.6f}\t' - f'time {batch_time.val:.4f} ({batch_time.avg:.4f})\t' - f'data_time {data_time.val:.4f} ({data_time.avg:.4f})\t' - f'loss {loss_meter.val:.4f} ({loss_meter.avg:.4f})\t' - f'grad_norm {norm_meter.val:.4f} ({norm_meter.avg:.4f})\t' - f'loss_scale {loss_scale_meter.val:.4f}' + - f' ({loss_scale_meter.avg:.4f})\t' - f'mem {memory_used:.0f}MB') - - epoch_time = time.time() - start - logger.info( - f"EPOCH {epoch} training takes " + - f"{datetime.timedelta(seconds=int(epoch_time))}") - - -def main(config): - """ - Starts training process after building the proper model, optimizer, etc. - - Args: - config: config object - """ - - pretrain_data_loader = build_mim_dataloader(config, logger) - - simmim_model = build_model(config, logger) - - simmim_optimizer = build_optimizer(config, - simmim_model, - is_pretrain=True, - logger=logger) - - model, model_wo_ddp = make_ddp(simmim_model) - - n_iter_per_epoch = len(pretrain_data_loader) - - lr_scheduler = build_scheduler(config, simmim_optimizer, n_iter_per_epoch) - - scaler = amp.GradScaler() - - train(config, - pretrain_data_loader, - model, - model_wo_ddp, - simmim_optimizer, - lr_scheduler, - scaler) - - -def build_model(config, logger): - - logger.info(f"Creating model:{config.MODEL.TYPE}/{config.MODEL.NAME}") - - model = build_mim_model(config) - - model.cuda() - - logger.info(str(model)) - - return model - - -def make_ddp(model): - - model = torch.nn.parallel.DistributedDataParallel( - model, device_ids=[int(os.environ["RANK"])], broadcast_buffers=False) - - model_without_ddp = model.module - - return model, model_without_ddp - - -def setup_rank_worldsize(): - if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ: - rank = int(os.environ["RANK"]) - world_size = int(os.environ['WORLD_SIZE']) - print(f"RANK and WORLD_SIZE in environ: {rank}/{world_size}") - else: - rank = -1 - world_size = -1 - return rank, world_size - - -def setup_distributed_processing(rank, world_size): - torch.cuda.set_device(int(os.environ["RANK"])) - torch.distributed.init_process_group( - backend='nccl', init_method='env://', world_size=world_size, rank=rank) - torch.distributed.barrier() - - -def setup_seeding(config): - seed = config.SEED + dist.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - - -if __name__ == '__main__': - _, config = parse_args() - - rank, world_size = setup_rank_worldsize() - - setup_distributed_processing(rank, world_size) - - setup_seeding(config) - - cudnn.benchmark = True - - linear_scaled_lr, linear_scaled_min_lr, linear_scaled_warmup_lr = \ - setup_scaled_lr(config) - - config.defrost() - config.TRAIN.BASE_LR = linear_scaled_lr - config.TRAIN.WARMUP_LR = linear_scaled_warmup_lr - config.TRAIN.MIN_LR = linear_scaled_min_lr - config.freeze() - - os.makedirs(config.OUTPUT, exist_ok=True) - logger = create_logger(output_dir=config.OUTPUT, - dist_rank=dist.get_rank(), - name=f"{config.MODEL.NAME}") - - if dist.get_rank() == 0: - path = os.path.join(config.OUTPUT, "config.json") - with open(path, "w") as f: - f.write(config.dump()) - logger.info(f"Full config saved to {path}") - logger.info(config.dump()) - config_file_name = f'{config.TAG}.config.sav' - config_file_path = 
os.path.join(config.OUTPUT, config_file_name) - joblib.dump(config, config_file_path) - - main(config) diff --git a/spaces/naveed92/topic_segmentation/app.py b/spaces/naveed92/topic_segmentation/app.py deleted file mode 100644 index 89c1e6ad8440091630caa1a0360ac1b342e3fb93..0000000000000000000000000000000000000000 --- a/spaces/naveed92/topic_segmentation/app.py +++ /dev/null @@ -1,139 +0,0 @@ -import streamlit as st - -from utils import window, get_depths, get_local_maxima, compute_threshold, get_threshold_segments - -import spacy -nlp = spacy.load('en_core_web_sm') - -def print_list(lst): - for e in lst: - st.markdown("- " + e) - -# Demo start - -st.subheader("Topic Segmentation Demo") - -uploaded_file = st.file_uploader("choose a text file", type=["txt"]) - -if uploaded_file is not None: - st.session_state["text"] = uploaded_file.getvalue().decode('utf-8') - -st.write("OR") - -input_text = st.text_area( - label="Enter text separated by newlines", - value="", - key="text", - height=150 - -) - -button=st.button('Get Segments') - -if (button==True) and input_text != "": - - # Parse sample document and break it into sentences - texts = input_text.split('\n') - sents = [] - for text in texts: - doc = nlp(text) - for sent in doc.sents: - sents.append(sent) - - # Select tokens while ignoring punctuations and stopwords, and lowercase them - MIN_LENGTH = 3 - tokenized_sents = [[token.lemma_.lower() for token in sent if - not token.is_stop and not token.is_punct and token.text.strip() and len(token) >= MIN_LENGTH] - for sent in sents] - - - st.write("building topic model ...") - - # Build gensim dictionary and topic model - from gensim import corpora, models - import numpy as np - - np.random.seed(123) - - N_TOPICS = 5 - N_PASSES = 5 - - dictionary = corpora.Dictionary(tokenized_sents) - bow = [dictionary.doc2bow(sent) for sent in tokenized_sents] - topic_model = models.LdaModel(corpus=bow, id2word=dictionary, num_topics=N_TOPICS, passes=N_PASSES) - - ###st.write(topic_model.show_topics()) - - - st.write("inferring topics ...") - # Infer topics with minimum threshold - THRESHOLD = 0.05 - doc_topics = list(topic_model.get_document_topics(bow, minimum_probability=THRESHOLD)) - - # st.write(doc_topics) - - # get top k topics for each sentence - k = 3 - top_k_topics = [[t[0] for t in sorted(sent_topics, key=lambda x: x[1], reverse=True)][:k] - for sent_topics in doc_topics] - # st.write(top_k_topics) - - ###st.write("apply window") - - from itertools import chain - - WINDOW_SIZE = 3 - - window_topics = window(top_k_topics, n=WINDOW_SIZE) - # assert(len(window_topics) == (len(tokenized_sents) - WINDOW_SIZE + 1)) - window_topics = [list(set(chain.from_iterable(window))) for window in window_topics] - - # Encode topics for similarity computation - - from sklearn.preprocessing import MultiLabelBinarizer - - binarizer = MultiLabelBinarizer(classes=range(N_TOPICS)) - - encoded_topic = binarizer.fit_transform(window_topics) - - # Get similarities - - st.write("generating segments ...") - - from sklearn.metrics.pairwise import cosine_similarity - - sims_topic = [cosine_similarity([pair[0]], [pair[1]])[0][0] for pair in zip(encoded_topic, encoded_topic[1:])] - # plot - - # Compute depth scores - depths_topic = get_depths(sims_topic) - # plot - - # Get local maxima - filtered_topic = get_local_maxima(depths_topic, order=1) - # plot - - ###st.write("compute threshold") - # Automatic threshold computation - # threshold_topic = compute_threshold(depths_topic) - threshold_topic = compute_threshold(filtered_topic) - 
- # topk_segments = get_topk_segments(filtered_topic, k=5) - # Select segments based on threshold - threshold_segments_topic = get_threshold_segments(filtered_topic, threshold_topic) - - # st.write(threshold_topic) - - ###st.write("compute segments") - - segment_ids = threshold_segments_topic + WINDOW_SIZE - - segment_ids = [0] + segment_ids.tolist() + [len(sents)] - slices = list(zip(segment_ids[:-1], segment_ids[1:])) - - segmented = [sents[s[0]: s[1]] for s in slices] - - for segment in segmented[:-1]: - print_list([s.text for s in segment]) - st.markdown("""---""") - print_list([s.text for s in segmented[-1]]) \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ees Software Free Download Crack 11 [2020].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ees Software Free Download Crack 11 [2020].md deleted file mode 100644 index a0a618ea43cc6a37cda38436465a6fbbbd48b6c9..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ees Software Free Download Crack 11 [2020].md +++ /dev/null @@ -1,46 +0,0 @@ -
        - - -

        EES Software Free Download Crack 11 [2020]: A Powerful Tool for Solving Engineering Equations

        - -

        If you are looking for software that can solve thousands of coupled non-linear algebraic and differential equations, perform optimization, uncertainty analysis, linear and non-linear regression, and unit conversion, as well as generate publication-quality plots, then you might want to check out EES Software Free Download Crack 11 [2020].

        -

        Ees Software Free Download Crack 11 [2020]


        Download Zip: https://urlcod.com/2uIcwf



        - -

        EES (Engineering Equation Solver) is a general equation-solving program that was developed by F-Chart Software[^1^]. It can handle complex thermodynamics and heat transfer problems by applying special functions and equations. It also has a built-in library of physical properties and constants for various substances and fluids.

        - -

        EES Software Free Download Crack 11 [2020] is a portable version of EES that does not require installation or license[^2^]. You can run it on any Windows PC and enjoy its full features without any limitation. You can also use Refprop software to calculate the properties of mixtures[^2^].

        - -

        EES Software Free Download Crack 11 [2020] is a useful tool for students, teachers, researchers, and engineers who need to solve engineering equations quickly and accurately. It can save you time and effort by providing you with reliable solutions and graphical outputs.

        - -

        However, before you download EES Software Free Download Crack 11 [2020], you should be aware that it is not an official or legal version of EES. It may contain viruses, malware, or other harmful components that could damage your computer or compromise your data. It may also violate the intellectual property rights of F-Chart Software and expose you to legal risks.

        - -

        Therefore, we recommend that you download EES Software Free Download Crack 11 [2020] at your own risk and discretion. We do not endorse or support any illegal or unethical use of EES software. If you want to use EES software legally and safely, you should purchase a licensed version from F-Chart Software's website[^1^].

        -

        - -

        EES Software Free Download Crack 11 [2020] is a powerful tool for solving engineering equations, but it comes with some drawbacks and dangers. You should weigh the pros and cons before you decide to download it. Alternatively, you can look for other similar software that is free or open source, such as Octave, Scilab, or Mathcad.

        - -

        How does EES Software Free Download Crack 11 [2020] work? It solves the set of equations that you enter in the main window. You can type the equations into the equation editor or import them from a text file, and you can use the built-in functions and variables to simplify your input. EES then solves the equations numerically and displays the results in a table or a plot. The parametric table lets you compare different values of variables and see how they affect the results, and the unit system lets you specify the units of your inputs and outputs and verify the dimensional consistency of your equations with the unit checking option.
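        To make the idea of numerically solving coupled non-linear equations concrete, here is a minimal sketch in Python with SciPy, one of the free and legal routes in the same spirit as the Octave or Scilab alternatives mentioned in this article. This is an illustration only, not EES itself; the two equations and the starting guess are invented for the example.

```python
# Minimal sketch: numerically solve a small coupled non-linear system,
# similar in spirit to what an equation solver such as EES does.
# The two equations below are made up purely for illustration.
from scipy.optimize import fsolve

def equations(vars):
    x, y = vars
    # x^2 + y^2 = 10  and  x * y = 3, written as residuals that should equal zero
    return [x**2 + y**2 - 10, x * y - 3]

solution = fsolve(equations, x0=[1.0, 1.0])  # [1.0, 1.0] is the initial guess
x, y = solution
print(f"x = {x:.4f}, y = {y:.4f}")
```

        Octave and Scilab expose a similar solver (also named fsolve), so the same workflow carries over to those free tools.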

        - -

        What are the advantages of EES Software Free Download Crack 11 [2020]? EES Software Free Download Crack 11 [2020] has many advantages over other equation-solving software. Some of them are:

        - -
          -
        • It can solve large and complex systems of equations that other software may not be able to handle.
        • -
        • It can handle non-linear, implicit, and transcendental equations that other software may not be able to solve.
        • -
        • It can perform optimization, uncertainty analysis, linear and non-linear regression, and sensitivity analysis on your equations and data.
        • -
        • It can generate high-quality plots and charts that you can customize and export for your reports and presentations.
        • -
        • It has a user-friendly interface that is easy to learn and use.
        • -
        • It has a comprehensive help system that provides examples, tutorials, and references for your convenience.
        • -
        - -

        What are the disadvantages of EES Software Free Download Crack 11 [2020]? EES Software Free Download Crack 11 [2020] also has some disadvantages that you should be aware of. Some of them are:

        - -
          -
        • It is not an official or legal version of EES. It may contain viruses, malware, or other harmful components that could damage your computer or compromise your data. It may also violate the intellectual property rights of F-Chart Software and expose you to legal risks.
        • -
        • It may not be compatible with the latest versions of Windows or EES. It may have bugs, errors, or limitations that could affect its performance or accuracy.
        • -
        • It may not have access to the latest updates, features, or support from F-Chart Software. It may not be able to handle new or advanced problems or functions that are available in the licensed version of EES.
        • -
        • It may not be ethical or professional to use EES Software Free Download Crack 11 [2020] for your academic or work purposes. It may undermine your credibility and reputation as a student, teacher, researcher, or engineer.
        • -
        - -

        In conclusion, EES Software Free Download Crack 11 [2020] is a powerful tool for solving engineering equations, but it comes with some drawbacks and dangers. You should weigh the pros and cons before you decide to download it. Alternatively, you can look for other similar software that is free or open source, such as Octave, Scilab, or Mathcad.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pankh Full Movie Download 720p Kickass Torrent VERIFIED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pankh Full Movie Download 720p Kickass Torrent VERIFIED.md deleted file mode 100644 index 22feba19522d58ef7a611a9b8f0384084fc2e40e..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pankh Full Movie Download 720p Kickass Torrent VERIFIED.md +++ /dev/null @@ -1,35 +0,0 @@ - -

        Pankh Full Movie Download 720p Kickass Torrent: How to Watch Pankh Online for Free

        - -

        Pankh is a 2010 Bollywood drama film that explores the dark and twisted reality of child actors in the Indian film industry. The film stars Bipasha Basu, Lilette Dubey, Mahesh Manjrekar, Ronit Roy, and Maradona Rebello in the lead roles. Pankh is a gripping and disturbing story of a boy who is forced to act as a girl in movies by his ambitious mother.

        - -

        If you are looking for a way to watch Pankh online for free, you might be tempted to use torrent sites like Kickass Torrents. However, downloading movies from torrent sites is illegal and risky. You might end up downloading malware or getting into trouble with the law. Moreover, torrent sites are often blocked by ISPs and authorities due to copyright infringement.

        -

        Pankh Full Movie Download 720p Kickass Torrent


        Download ⇒⇒⇒ https://urlcod.com/2uIc37



        - -

        So, what is the best way to watch Pankh online for free? The answer is to use a legal and safe streaming service that offers Pankh and other Bollywood movies. There are many such services available on the web, but not all of them are reliable and secure. To help you out, we have compiled a list of the best streaming services that offer Pankh and other Bollywood movies for free.

        - -

        The Best Streaming Services to Watch Pankh Online for Free

        - -
          -
        • Hotstar: Hotstar is one of the most popular streaming platforms in India that offers a wide range of content, including movies, TV shows, sports, news, and more. You can watch Pankh online for free on Hotstar with ads. You can also subscribe to Hotstar Premium or Hotstar VIP to access more content and features without ads.
        • -
        • Zee5: Zee5 is another leading streaming service in India that offers a variety of content across genres and languages. You can watch Pankh online for free on Zee5 with ads. You can also subscribe to Zee5 Premium or Zee5 Club to access more content and features without ads.
        • -
        • Mubi: Mubi is a streaming service that specializes in curated films from around the world. You can watch Pankh online for free on Mubi with a 7-day trial. You can also subscribe to Mubi for $10.99 per month or $95.88 per year to access more films and features.
        • -
        • YouTube: YouTube is the largest video-sharing platform on the web that offers millions of videos, including movies, TV shows, music, documentaries, and more. You can watch Pankh online for free on YouTube with ads. You can also rent or buy Pankh on YouTube for a small fee.
        • -
        - -

        These are some of the best streaming services that offer Pankh and other Bollywood movies for free. However, you might face some geo-restrictions or content limitations depending on your location and device. To overcome these issues, you can use a VPN service that can help you access any streaming service from anywhere in the world.

        - -

        How to Use a VPN to Watch Pankh Online for Free

        - -

        A VPN, or Virtual Private Network, is software that creates a secure and encrypted connection between your device and a server in another location. By using a VPN, you can change your IP address and spoof your location to access any streaming service from anywhere in the world. A VPN can also protect your online privacy and security by hiding your online activity from your ISP, hackers, and authorities.

        - -

        To use a VPN to watch Pankh online for free, you need to follow these simple steps:

        - -
          -
        1. Choose a reliable and reputable VPN service that offers fast speeds, unlimited bandwidth, multiple servers, and strong encryption. We recommend NordVPN, which is one of the best VPNs on the market.
        2. -
        3. Download and install the VPN app on your device and sign up for an account.
        4. -
        5. Launch the VPN app and connect to a server in the country where your desired streaming service is available.
        6. -
        7. Open your chosen streaming service, search for Pankh, and start watching.

          -

          81aa517590
          -
          -
          \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/panoptic_fpn.py b/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/panoptic_fpn.py deleted file mode 100644 index 88f55d2ce9db62e61445d6a3700067d9d864ecae..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/configs/common/models/panoptic_fpn.py +++ /dev/null @@ -1,20 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling import PanopticFPN -from detectron2.modeling.meta_arch.semantic_seg import SemSegFPNHead - -from .mask_rcnn_fpn import model - -model._target_ = PanopticFPN -model.sem_seg_head = L(SemSegFPNHead)( - input_shape={ - f: L(ShapeSpec)(stride=s, channels="${....backbone.out_channels}") - for f, s in zip(["p2", "p3", "p4", "p5"], [4, 8, 16, 32]) - }, - ignore_value=255, - num_classes=54, # COCO stuff + 1 - conv_dims=128, - common_stride=4, - loss_weight=0.5, - norm="GN", -) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/__init__.py deleted file mode 100644 index f3ee6057e3ec2731984ce8203c6eaf5348d08260..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/nobrowning/M2M/app.py b/spaces/nobrowning/M2M/app.py deleted file mode 100644 index af6e1dc0f67bd6823fd0bab2d2ddaa2ee8c84a95..0000000000000000000000000000000000000000 --- a/spaces/nobrowning/M2M/app.py +++ /dev/null @@ -1,198 +0,0 @@ -import streamlit as st -import os -import io -from transformers import M2M100Tokenizer, M2M100ForConditionalGeneration -from transformers import AutoTokenizer, AutoModelForSequenceClassification -from languages import LANGUANGE_MAP -import time -import json -from typing import List -import torch -import random -import logging - -if torch.cuda.is_available(): - device = torch.device("cuda:0") -else: - device = torch.device("cpu") - logging.warning("GPU not found, using CPU, translation will be very slow.") - -st.cache(suppress_st_warning=True, allow_output_mutation=True) -st.set_page_config(page_title="M2M100 Translator") - -lang_id = { - "Afrikaans": "af", - "Amharic": "am", - "Arabic": "ar", - "Asturian": "ast", - "Azerbaijani": "az", - "Bashkir": "ba", - "Belarusian": "be", - "Bulgarian": "bg", - "Bengali": "bn", - "Breton": "br", - "Bosnian": "bs", - "Catalan": "ca", - "Cebuano": "ceb", - "Czech": "cs", - "Welsh": "cy", - "Danish": "da", - "German": "de", - "Greeek": "el", - "English": "en", - "Spanish": "es", - "Estonian": "et", - "Persian": "fa", - "Fulah": "ff", - "Finnish": "fi", - "French": "fr", - "Western Frisian": "fy", - "Irish": "ga", - "Gaelic": "gd", - "Galician": 
"gl", - "Gujarati": "gu", - "Hausa": "ha", - "Hebrew": "he", - "Hindi": "hi", - "Croatian": "hr", - "Haitian": "ht", - "Hungarian": "hu", - "Armenian": "hy", - "Indonesian": "id", - "Igbo": "ig", - "Iloko": "ilo", - "Icelandic": "is", - "Italian": "it", - "Japanese": "ja", - "Javanese": "jv", - "Georgian": "ka", - "Kazakh": "kk", - "Central Khmer": "km", - "Kannada": "kn", - "Korean": "ko", - "Luxembourgish": "lb", - "Ganda": "lg", - "Lingala": "ln", - "Lao": "lo", - "Lithuanian": "lt", - "Latvian": "lv", - "Malagasy": "mg", - "Macedonian": "mk", - "Malayalam": "ml", - "Mongolian": "mn", - "Marathi": "mr", - "Malay": "ms", - "Burmese": "my", - "Nepali": "ne", - "Dutch": "nl", - "Norwegian": "no", - "Northern Sotho": "ns", - "Occitan": "oc", - "Oriya": "or", - "Panjabi": "pa", - "Polish": "pl", - "Pushto": "ps", - "Portuguese": "pt", - "Romanian": "ro", - "Russian": "ru", - "Sindhi": "sd", - "Sinhala": "si", - "Slovak": "sk", - "Slovenian": "sl", - "Somali": "so", - "Albanian": "sq", - "Serbian": "sr", - "Swati": "ss", - "Sundanese": "su", - "Swedish": "sv", - "Swahili": "sw", - "Tamil": "ta", - "Thai": "th", - "Tagalog": "tl", - "Tswana": "tn", - "Turkish": "tr", - "Ukrainian": "uk", - "Urdu": "ur", - "Uzbek": "uz", - "Vietnamese": "vi", - "Wolof": "wo", - "Xhosa": "xh", - "Yiddish": "yi", - "Yoruba": "yo", - "Chinese": "zh", - "Zulu": "zu", -} - - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def load_model( - pretrained_model: str = "facebook/m2m100_1.2B", - cache_dir: str = "models/", -): - tokenizer = M2M100Tokenizer.from_pretrained(pretrained_model, cache_dir=cache_dir) - model = M2M100ForConditionalGeneration.from_pretrained( - pretrained_model, cache_dir=cache_dir - ).to(device) - model.eval() - return tokenizer, model - - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def load_detection_model( - pretrained_model: str = "ivanlau/language-detection-fine-tuned-on-xlm-roberta-base", - cache_dir: str = "models/", -): - tokenizer = AutoTokenizer.from_pretrained(pretrained_model, cache_dir=cache_dir) - model = AutoModelForSequenceClassification.from_pretrained(pretrained_model, cache_dir=cache_dir).to(device) - model.eval() - return tokenizer, model - - -st.title("M2M100 Translator") -st.write("M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this paper https://arxiv.org/abs/2010.11125 and first released in https://github.com/pytorch/fairseq/tree/master/examples/m2m_100 repository. The model that can directly translate between the 9,900 directions of 100 languages.\n") - -st.write(" This demo uses the facebook/m2m100_1.2B model. 
For local inference see https://github.com/ikergarcia1996/Easy-Translate") - - -user_input: str = st.text_area( - "Input text", - height=200, - max_chars=5120, -) - -target_lang = st.selectbox(label="Target language", options=list(lang_id.keys())) - -if st.button("Run"): - time_start = time.time() - tokenizer, model = load_model() - de_tokenizer, de_model = load_detection_model() - - with torch.no_grad(): - - tokenized_sentence = de_tokenizer(user_input, return_tensors='pt') - output = de_model(**tokenized_sentence) - de_predictions = torch.nn.functional.softmax(output.logits, dim=-1) - _, preds = torch.max(de_predictions, dim=-1) - - lang_type = LANGUANGE_MAP[preds.item()] - - if lang_type not in lang_id: - time_end = time.time() - st.success('Unsupported Language') - st.write(f"Computation time: {round((time_end-time_start),3)} segs") - else: - src_lang = lang_id[lang_type] - trg_lang = lang_id[target_lang] - tokenizer.src_lang = src_lang - encoded_input = tokenizer(user_input, return_tensors="pt").to(device) - generated_tokens = model.generate( - **encoded_input, forced_bos_token_id=tokenizer.get_lang_id(trg_lang) - ) - translated_text = tokenizer.batch_decode( - generated_tokens, skip_special_tokens=True - )[0] - - time_end = time.time() - st.success(translated_text) - - st.write(f"Computation time: {round((time_end-time_start),3)} segs") diff --git a/spaces/nunekeerthi1/MyGenAIChatBot/app.py b/spaces/nunekeerthi1/MyGenAIChatBot/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/nunekeerthi1/MyGenAIChatBot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/flow_losses.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/flow_losses.py deleted file mode 100644 index d1a266bdd6d85fcd0aeb6574ee62bda6b6a242b5..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/models/utils/flow_losses.py +++ /dev/null @@ -1,517 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -import numpy as np -from .fbConsistencyCheck import image_warp - - -class FlowWarpingLoss(nn.Module): - def __init__(self, metric): - super(FlowWarpingLoss, self).__init__() - self.metric = metric - - def warp(self, x, flow): - """ - - Args: - x: torch tensor with shape [b, c, h, w], the x can be 3 (for rgb frame) or 2 (for optical flow) - flow: torch tensor with shape [b, 2, h, w] - - Returns: the warped x (can be an image or an optical flow) - - """ - h, w = x.shape[2:] - device = x.device - # normalize the flow to [-1~1] - flow = torch.cat([flow[:, 0:1, :, :] / ((w - 1) / 2), flow[:, 1:2, :, :] / ((h - 1) / 2)], dim=1) - flow = flow.permute(0, 2, 3, 1) # change to [b, h, w, c] - # generate meshgrid - x_idx = np.linspace(-1, 1, w) - y_idx = np.linspace(-1, 1, h) - X_idx, Y_idx = np.meshgrid(x_idx, y_idx) - grid = torch.cat((torch.from_numpy(X_idx.astype('float32')).unsqueeze(0).unsqueeze(3), - torch.from_numpy(Y_idx.astype('float32')).unsqueeze(0).unsqueeze(3)), 3).to(device) - output = torch.nn.functional.grid_sample(x, grid + flow, mode='bilinear', padding_mode='zeros') - return output - - def __call__(self, x, y, flow, mask): - """ - image/flow warping, only support the single image/flow warping - Args: - x: Can be optical flow or image with shape [b, c, h, w], c can be 2 or 3 - y: The ground truth of x (can be the extracted optical flow or image) - flow: The flow used to warp x, whose shape is [b, 2, h, w] - mask: The mask which indicates the hole of x, which must be [b, 1, h, w] - - Returns: the warped image/optical flow - - """ - warped_x = self.warp(x, flow) - loss = self.metric(warped_x * mask, y * mask) - return loss - - -class TVLoss(): - # shift one pixel to get difference ( for both x and y direction) - def __init__(self): - super(TVLoss, self).__init__() - - def __call__(self, x): - loss = torch.mean(torch.abs(x[:, :, :, :-1] - x[:, :, :, 1:])) + torch.mean( - torch.abs(x[:, :, :-1, :] - x[:, :, 1:, :])) - return loss - - -class WarpLoss(nn.Module): - def __init__(self): - super(WarpLoss, self).__init__() - self.metric = nn.L1Loss() - - def forward(self, flow, mask, img1, img2): - """ - - Args: - flow: flow indicates the motion from img1 to img2 - mask: mask corresponds to img1 - img1: frame 1 - img2: frame t+1 - - Returns: warp loss from img2 to img1 - - """ - img2_warped = image_warp(img2, flow) - loss = self.metric(img2_warped * mask, img1 * mask) - return loss - - -class AdversarialLoss(nn.Module): - r""" - Adversarial loss - https://arxiv.org/abs/1711.10337 - """ - - def __init__(self, type='nsgan', target_real_label=1.0, target_fake_label=0.0): - r""" - type = nsgan | lsgan | hinge - """ - super(AdversarialLoss, self).__init__() - - self.type = type - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - - if type == 'nsgan': - self.criterion = nn.BCELoss() - - elif type == 'lsgan': - self.criterion = nn.MSELoss() - - elif type == 'hinge': - self.criterion = nn.ReLU() - - def 
__call__(self, outputs, is_real, is_disc=None): - if self.type == 'hinge': - if is_disc: - if is_real: - outputs = -outputs - return self.criterion(1 + outputs).mean() - else: - return (-outputs).mean() - - else: - labels = (self.real_label if is_real else self.fake_label).expand_as(outputs) - loss = self.criterion(outputs, labels) - return loss - - -class StyleLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self): - super(StyleLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - - def compute_gram(self, x): - b, ch, h, w = x.size() - f = x.view(b, ch, w * h) - f_T = f.transpose(1, 2) - G = f.bmm(f_T) / (h * w * ch) - - return G - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - # Compute loss - style_loss = 0.0 - style_loss += self.criterion(self.compute_gram(x_vgg['relu2_2']), self.compute_gram(y_vgg['relu2_2'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu3_4']), self.compute_gram(y_vgg['relu3_4'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu4_4']), self.compute_gram(y_vgg['relu4_4'])) - style_loss += self.criterion(self.compute_gram(x_vgg['relu5_2']), self.compute_gram(y_vgg['relu5_2'])) - - return style_loss - - -class PerceptualLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]): - super(PerceptualLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - self.weights = weights - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - content_loss = 0.0 - content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1']) - content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1']) - content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1']) - content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1']) - content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1']) - - return content_loss - - -class VGG19(torch.nn.Module): - def __init__(self): - super(VGG19, self).__init__() - features = models.vgg19(pretrained=True).features - self.relu1_1 = torch.nn.Sequential() - self.relu1_2 = torch.nn.Sequential() - - self.relu2_1 = torch.nn.Sequential() - self.relu2_2 = torch.nn.Sequential() - - self.relu3_1 = torch.nn.Sequential() - self.relu3_2 = torch.nn.Sequential() - self.relu3_3 = torch.nn.Sequential() - self.relu3_4 = torch.nn.Sequential() - - self.relu4_1 = torch.nn.Sequential() - self.relu4_2 = torch.nn.Sequential() - self.relu4_3 = torch.nn.Sequential() - self.relu4_4 = torch.nn.Sequential() - - self.relu5_1 = torch.nn.Sequential() - self.relu5_2 = torch.nn.Sequential() - self.relu5_3 = torch.nn.Sequential() - self.relu5_4 = torch.nn.Sequential() - - for x in range(2): - self.relu1_1.add_module(str(x), features[x]) - - for x in range(2, 4): - self.relu1_2.add_module(str(x), features[x]) - - for x in range(4, 7): - self.relu2_1.add_module(str(x), features[x]) - - for x in range(7, 9): - self.relu2_2.add_module(str(x), features[x]) - - for x in range(9, 12): - self.relu3_1.add_module(str(x), features[x]) - - for x in range(12, 14): - self.relu3_2.add_module(str(x), features[x]) - - for x in 
range(14, 16): - self.relu3_3.add_module(str(x), features[x]) - - for x in range(16, 18): - self.relu3_4.add_module(str(x), features[x]) - - for x in range(18, 21): - self.relu4_1.add_module(str(x), features[x]) - - for x in range(21, 23): - self.relu4_2.add_module(str(x), features[x]) - - for x in range(23, 25): - self.relu4_3.add_module(str(x), features[x]) - - for x in range(25, 27): - self.relu4_4.add_module(str(x), features[x]) - - for x in range(27, 30): - self.relu5_1.add_module(str(x), features[x]) - - for x in range(30, 32): - self.relu5_2.add_module(str(x), features[x]) - - for x in range(32, 34): - self.relu5_3.add_module(str(x), features[x]) - - for x in range(34, 36): - self.relu5_4.add_module(str(x), features[x]) - - # don't need the gradients, just want the features - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x): - relu1_1 = self.relu1_1(x) - relu1_2 = self.relu1_2(relu1_1) - - relu2_1 = self.relu2_1(relu1_2) - relu2_2 = self.relu2_2(relu2_1) - - relu3_1 = self.relu3_1(relu2_2) - relu3_2 = self.relu3_2(relu3_1) - relu3_3 = self.relu3_3(relu3_2) - relu3_4 = self.relu3_4(relu3_3) - - relu4_1 = self.relu4_1(relu3_4) - relu4_2 = self.relu4_2(relu4_1) - relu4_3 = self.relu4_3(relu4_2) - relu4_4 = self.relu4_4(relu4_3) - - relu5_1 = self.relu5_1(relu4_4) - relu5_2 = self.relu5_2(relu5_1) - relu5_3 = self.relu5_3(relu5_2) - relu5_4 = self.relu5_4(relu5_3) - - out = { - 'relu1_1': relu1_1, - 'relu1_2': relu1_2, - - 'relu2_1': relu2_1, - 'relu2_2': relu2_2, - - 'relu3_1': relu3_1, - 'relu3_2': relu3_2, - 'relu3_3': relu3_3, - 'relu3_4': relu3_4, - - 'relu4_1': relu4_1, - 'relu4_2': relu4_2, - 'relu4_3': relu4_3, - 'relu4_4': relu4_4, - - 'relu5_1': relu5_1, - 'relu5_2': relu5_2, - 'relu5_3': relu5_3, - 'relu5_4': relu5_4, - } - return out - - -# Some losses related to optical flows -# From Unflow: https://github.com/simonmeister/UnFlow -def fbLoss(forward_flow, backward_flow, forward_gt_flow, backward_gt_flow, fb_loss_weight, image_warp_loss_weight=0, - occ_weight=0, beta=255, first_image=None, second_image=None): - """ - calculate the forward-backward consistency loss and the related image warp loss - Args: - forward_flow: torch tensor, with shape [b, c, h, w] - backward_flow: torch tensor, with shape [b, c, h, w] - forward_gt_flow: the ground truth of the forward flow (used for occlusion calculation) - backward_gt_flow: the ground truth of the backward flow (used for occlusion calculation) - fb_loss_weight: loss weight for forward-backward consistency check between two frames - image_warp_loss_weight: loss weight for image warping - occ_weight: loss weight for occlusion area (serve as a punishment for image warp loss) - beta: 255 by default, according to the original loss codes in unflow - first_image: the previous image (extraction for the optical flows) - second_image: the later image (extraction for the optical flows) - Note: forward and backward flow should be extracted from the same image pair - Returns: forward backward consistency loss between forward and backward flow - - """ - mask_fw = create_outgoing_mask(forward_flow).float() - mask_bw = create_outgoing_mask(backward_flow).float() - - # forward warp backward flow and backward forward flow to calculate the cycle consistency - forward_flow_warped = image_warp(forward_flow, backward_gt_flow) - forward_flow_warped_gt = image_warp(forward_gt_flow, backward_gt_flow) - backward_flow_warped = image_warp(backward_flow, forward_gt_flow) - backward_flow_warped_gt = 
image_warp(backward_gt_flow, forward_gt_flow) - flow_diff_fw = backward_flow_warped + forward_flow - flow_diff_fw_gt = backward_flow_warped_gt + forward_gt_flow - flow_diff_bw = backward_flow + forward_flow_warped - flow_diff_bw_gt = backward_gt_flow + forward_flow_warped_gt - - # occlusion calculation - mag_sq_fw = length_sq(forward_gt_flow) + length_sq(backward_flow_warped_gt) - mag_sq_bw = length_sq(backward_gt_flow) + length_sq(forward_flow_warped_gt) - occ_thresh_fw = 0.01 * mag_sq_fw + 0.5 - occ_thresh_bw = 0.01 * mag_sq_bw + 0.5 - - fb_occ_fw = (length_sq(flow_diff_fw_gt) > occ_thresh_fw).float() - fb_occ_bw = (length_sq(flow_diff_bw_gt) > occ_thresh_bw).float() - - mask_fw *= (1 - fb_occ_fw) - mask_bw *= (1 - fb_occ_bw) - - occ_fw = 1 - mask_fw - occ_bw = 1 - mask_bw - - if image_warp_loss_weight != 0: - # warp images - second_image_warped = image_warp(second_image, forward_flow) # frame 2 -> 1 - first_image_warped = image_warp(first_image, backward_flow) # frame 1 -> 2 - im_diff_fw = first_image - second_image_warped - im_diff_bw = second_image - first_image_warped - # calculate the image warp loss based on the occlusion regions calculated by forward and backward flows (gt) - occ_loss = occ_weight * (charbonnier_loss(occ_fw) + charbonnier_loss(occ_bw)) - image_warp_loss = image_warp_loss_weight * ( - charbonnier_loss(im_diff_fw, mask_fw, beta=beta) + charbonnier_loss(im_diff_bw, mask_bw, - beta=beta)) + occ_loss - else: - image_warp_loss = 0 - fb_loss = fb_loss_weight * (charbonnier_loss(flow_diff_fw, mask_fw) + charbonnier_loss(flow_diff_bw, mask_bw)) - return fb_loss + image_warp_loss - - -def length_sq(x): - return torch.sum(torch.square(x), 1, keepdim=True) - - -def smoothness_loss(flow, cmask): - delta_u, delta_v, mask = smoothness_deltas(flow) - loss_u = charbonnier_loss(delta_u, cmask) - loss_v = charbonnier_loss(delta_v, cmask) - return loss_u + loss_v - - -def smoothness_deltas(flow): - """ - flow: [b, c, h, w] - """ - mask_x = create_mask(flow, [[0, 0], [0, 1]]) - mask_y = create_mask(flow, [[0, 1], [0, 0]]) - mask = torch.cat((mask_x, mask_y), dim=1) - mask = mask.to(flow.device) - filter_x = torch.tensor([[0, 0, 0.], [0, 1, -1], [0, 0, 0]]) - filter_y = torch.tensor([[0, 0, 0.], [0, 1, 0], [0, -1, 0]]) - weights = torch.ones([2, 1, 3, 3]) - weights[0, 0] = filter_x - weights[1, 0] = filter_y - weights = weights.to(flow.device) - - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) - delta_u = F.conv2d(flow_u, weights, stride=1, padding=1) - delta_v = F.conv2d(flow_v, weights, stride=1, padding=1) - return delta_u, delta_v, mask - - -def second_order_loss(flow, cmask): - delta_u, delta_v, mask = second_order_deltas(flow) - loss_u = charbonnier_loss(delta_u, cmask) - loss_v = charbonnier_loss(delta_v, cmask) - return loss_u + loss_v - - -def charbonnier_loss(x, mask=None, truncate=None, alpha=0.45, beta=1.0, epsilon=0.001): - """ - Compute the generalized charbonnier loss of the difference tensor x - All positions where mask == 0 are not taken into account - x: a tensor of shape [b, c, h, w] - mask: a mask of shape [b, mc, h, w], where mask channels must be either 1 or the same as - the number of channels of x. 
Entries should be 0 or 1 - return: loss - """ - b, c, h, w = x.shape - norm = b * c * h * w - error = torch.pow(torch.square(x * beta) + torch.square(torch.tensor(epsilon)), alpha) - if mask is not None: - error = mask * error - if truncate is not None: - error = torch.min(error, truncate) - return torch.sum(error) / norm - - -def second_order_deltas(flow): - """ - consider the single flow first - flow shape: [b, c, h, w] - """ - # create mask - mask_x = create_mask(flow, [[0, 0], [1, 1]]) - mask_y = create_mask(flow, [[1, 1], [0, 0]]) - mask_diag = create_mask(flow, [[1, 1], [1, 1]]) - mask = torch.cat((mask_x, mask_y, mask_diag, mask_diag), dim=1) - mask = mask.to(flow.device) - - filter_x = torch.tensor([[0, 0, 0.], [1, -2, 1], [0, 0, 0]]) - filter_y = torch.tensor([[0, 1, 0.], [0, -2, 0], [0, 1, 0]]) - filter_diag1 = torch.tensor([[1, 0, 0.], [0, -2, 0], [0, 0, 1]]) - filter_diag2 = torch.tensor([[0, 0, 1.], [0, -2, 0], [1, 0, 0]]) - weights = torch.ones([4, 1, 3, 3]) - weights[0] = filter_x - weights[1] = filter_y - weights[2] = filter_diag1 - weights[3] = filter_diag2 - weights = weights.to(flow.device) - - # split the flow into flow_u and flow_v, conv them with the weights - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) - delta_u = F.conv2d(flow_u, weights, stride=1, padding=1) - delta_v = F.conv2d(flow_v, weights, stride=1, padding=1) - return delta_u, delta_v, mask - - -def create_mask(tensor, paddings): - """ - tensor shape: [b, c, h, w] - paddings: [2 x 2] shape list, the first row indicates up and down paddings - the second row indicates left and right paddings - | | - | x | - | x * x | - | x | - | | - """ - shape = tensor.shape - inner_height = shape[2] - (paddings[0][0] + paddings[0][1]) - inner_width = shape[3] - (paddings[1][0] + paddings[1][1]) - inner = torch.ones([inner_height, inner_width]) - torch_paddings = [paddings[1][0], paddings[1][1], paddings[0][0], paddings[0][1]] # left, right, up and down - mask2d = F.pad(inner, pad=torch_paddings) - mask3d = mask2d.unsqueeze(0).repeat(shape[0], 1, 1) - mask4d = mask3d.unsqueeze(1) - return mask4d.detach() - - -def create_outgoing_mask(flow): - """ - Computes a mask that is zero at all positions where the flow would carry a pixel over the image boundary - For such pixels, it's invalid to calculate the flow losses - Args: - flow: torch tensor: with shape [b, 2, h, w] - - Returns: a mask, 1 indicates in-boundary pixels, with shape [b, 1, h, w] - - """ - b, c, h, w = flow.shape - - grid_x = torch.reshape(torch.arange(0, w), [1, 1, w]) - grid_x = grid_x.repeat(b, h, 1).float() - grid_y = torch.reshape(torch.arange(0, h), [1, h, 1]) - grid_y = grid_y.repeat(b, 1, w).float() - - grid_x = grid_x.to(flow.device) - grid_y = grid_y.to(flow.device) - - flow_u, flow_v = torch.split(flow, split_size_or_sections=1, dim=1) # [b, h, w] - pos_x = grid_x + flow_u - pos_y = grid_y + flow_v - inside_x = torch.logical_and(pos_x <= w - 1, pos_x >= 0) - inside_y = torch.logical_and(pos_y <= h - 1, pos_y >= 0) - inside = torch.logical_and(inside_x, inside_y) - if len(inside.shape) == 3: - inside = inside.unsqueeze(1) - return inside diff --git a/spaces/oguzakif/video-object-remover/SiamMask/datasets/siam_mask_dataset.py b/spaces/oguzakif/video-object-remover/SiamMask/datasets/siam_mask_dataset.py deleted file mode 100644 index d8ccb0fc551a93d038629bcda844543af8f32669..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/datasets/siam_mask_dataset.py +++ /dev/null @@ -1,607 +0,0 @@ -# 
-------------------------------------------------------- -# SiamMask -# Licensed under The MIT License -# Written by Qiang Wang (wangqiang2015 at ia.ac.cn) -# -------------------------------------------------------- -from __future__ import division -from torch.utils.data import Dataset -import numpy as np -import json -import random -import logging -from os.path import join -from utils.bbox_helper import * -from utils.anchors import Anchors -import math -import sys -pyv = sys.version[0] -import cv2 -if pyv[0] == '3': - cv2.ocl.setUseOpenCL(False) - -logger = logging.getLogger('global') - - -sample_random = random.Random() -sample_random.seed(123456) - - -class SubDataSet(object): - def __init__(self, cfg): - for string in ['root', 'anno']: - if string not in cfg: - raise Exception('SubDataSet need "{}"'.format(string)) - - with open(cfg['anno']) as fin: - logger.info("loading " + cfg['anno']) - self.labels = self.filter_zero(json.load(fin), cfg) - - def isint(x): - try: - int(x) - return True - except: - return False - - # add frames args into labels - to_del = [] - for video in self.labels: - for track in self.labels[video]: - frames = self.labels[video][track] - frames = list(map(int, filter(lambda x: isint(x), frames.keys()))) - frames.sort() - self.labels[video][track]['frames'] = frames - if len(frames) <= 0: - logger.info("warning {}/{} has no frames.".format(video, track)) - to_del.append((video, track)) - - # delete tracks with no frames - for video, track in to_del: - del self.labels[video][track] - - # delete videos with no valid track - to_del = [] - for video in self.labels: - if len(self.labels[video]) <= 0: - logger.info("warning {} has no tracks".format(video)) - to_del.append(video) - - for video in to_del: - del self.labels[video] - - self.videos = list(self.labels.keys()) - - logger.info(cfg['anno'] + " loaded.") - - # default args - self.root = "/" - self.start = 0 - self.num = len(self.labels) - self.num_use = self.num - self.frame_range = 100 - self.mark = "vid" - self.path_format = "{}.{}.{}.jpg" - self.mask_format = "{}.{}.m.png" - - self.pick = [] - - # input args - self.__dict__.update(cfg) - - self.has_mask = self.mark in ['coco', 'ytb_vos'] - - self.num_use = int(self.num_use) - - # shuffle - self.shuffle() - - def filter_zero(self, anno, cfg): - name = cfg.get('mark', '') - - out = {} - tot = 0 - new = 0 - zero = 0 - - for video, tracks in anno.items(): - new_tracks = {} - for trk, frames in tracks.items(): - new_frames = {} - for frm, bbox in frames.items(): - tot += 1 - if len(bbox) == 4: - x1, y1, x2, y2 = bbox - w, h = x2 - x1, y2 - y1 - else: - w, h = bbox - if w == 0 or h == 0: - logger.info('Error, {name} {video} {trk} {bbox}'.format(**locals())) - zero += 1 - continue - new += 1 - new_frames[frm] = bbox - - if len(new_frames) > 0: - new_tracks[trk] = new_frames - - if len(new_tracks) > 0: - out[video] = new_tracks - - return out - - def log(self): - logger.info('SubDataSet {name} start-index {start} select [{select}/{num}] path {format}'.format( - name=self.mark, start=self.start, select=self.num_use, num=self.num, format=self.path_format - )) - - def shuffle(self): - lists = list(range(self.start, self.start + self.num)) - - m = 0 - pick = [] - while m < self.num_use: - sample_random.shuffle(lists) - pick += lists - m += self.num - - self.pick = pick[:self.num_use] - return self.pick - - def get_image_anno(self, video, track, frame): - frame = "{:06d}".format(frame) - image_path = join(self.root, video, self.path_format.format(frame, track, 'x')) - 
image_anno = self.labels[video][track][frame] - - mask_path = join(self.root, video, self.mask_format.format(frame, track)) - - return image_path, image_anno, mask_path - - def get_positive_pair(self, index): - video_name = self.videos[index] - video = self.labels[video_name] - track = random.choice(list(video.keys())) - track_info = video[track] - - frames = track_info['frames'] - - if 'hard' not in track_info: - template_frame = random.randint(0, len(frames)-1) - - left = max(template_frame - self.frame_range, 0) - right = min(template_frame + self.frame_range, len(frames)-1) + 1 - search_range = frames[left:right] - template_frame = frames[template_frame] - search_frame = random.choice(search_range) - else: - search_frame = random.choice(track_info['hard']) - left = max(search_frame - self.frame_range, 0) - right = min(search_frame + self.frame_range, len(frames)-1) + 1 # python [left:right+1) = [left:right] - template_range = frames[left:right] - template_frame = random.choice(template_range) - search_frame = frames[search_frame] - - return self.get_image_anno(video_name, track, template_frame), \ - self.get_image_anno(video_name, track, search_frame) - - def get_random_target(self, index=-1): - if index == -1: - index = random.randint(0, self.num-1) - video_name = self.videos[index] - video = self.labels[video_name] - track = random.choice(list(video.keys())) - track_info = video[track] - - frames = track_info['frames'] - frame = random.choice(frames) - - return self.get_image_anno(video_name, track, frame) - - -def crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)): - bbox = [float(x) for x in bbox] - a = (out_sz-1) / (bbox[2]-bbox[0]) - b = (out_sz-1) / (bbox[3]-bbox[1]) - c = -a * bbox[0] - d = -b * bbox[1] - mapping = np.array([[a, 0, c], - [0, b, d]]).astype(np.float) - crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding) - return crop - - -class Augmentation: - def __init__(self, cfg): - # default args - self.shift = 0 - self.scale = 0 - self.blur = 0 # False - self.resize = False - self.rgbVar = np.array([[-0.55919361, 0.98062831, - 0.41940627], - [1.72091413, 0.19879334, - 1.82968581], - [4.64467907, 4.73710203, 4.88324118]], dtype=np.float32) - self.flip = 0 - - self.eig_vec = np.array([ - [0.4009, 0.7192, -0.5675], - [-0.8140, -0.0045, -0.5808], - [0.4203, -0.6948, -0.5836], - ], dtype=np.float32) - - self.eig_val = np.array([[0.2175, 0.0188, 0.0045]], np.float32) - - self.__dict__.update(cfg) - - @staticmethod - def random(): - return random.random() * 2 - 1.0 - - def blur_image(self, image): - def rand_kernel(): - size = np.random.randn(1) - size = int(np.round(size)) * 2 + 1 - if size < 0: return None - if random.random() < 0.5: return None - size = min(size, 45) - kernel = np.zeros((size, size)) - c = int(size/2) - wx = random.random() - kernel[:, c] += 1. / size * wx - kernel[c, :] += 1. 
/ size * (1-wx) - return kernel - - kernel = rand_kernel() - - if kernel is not None: - image = cv2.filter2D(image, -1, kernel) - return image - - def __call__(self, image, bbox, size, gray=False, mask=None): - if gray: - grayed = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - image = np.zeros((grayed.shape[0], grayed.shape[1], 3), np.uint8) - image[:, :, 0] = image[:, :, 1] = image[:, :, 2] = grayed - - shape = image.shape - - crop_bbox = center2corner((shape[0]//2, shape[1]//2, size-1, size-1)) - - param = {} - if self.shift: - param['shift'] = (Augmentation.random() * self.shift, Augmentation.random() * self.shift) - - if self.scale: - param['scale'] = ((1.0 + Augmentation.random() * self.scale), (1.0 + Augmentation.random() * self.scale)) - - crop_bbox, _ = aug_apply(Corner(*crop_bbox), param, shape) - - x1 = crop_bbox.x1 - y1 = crop_bbox.y1 - - bbox = BBox(bbox.x1 - x1, bbox.y1 - y1, - bbox.x2 - x1, bbox.y2 - y1) - - if self.scale: - scale_x, scale_y = param['scale'] - bbox = Corner(bbox.x1 / scale_x, bbox.y1 / scale_y, bbox.x2 / scale_x, bbox.y2 / scale_y) - - image = crop_hwc(image, crop_bbox, size) - if not mask is None: - mask = crop_hwc(mask, crop_bbox, size) - - offset = np.dot(self.rgbVar, np.random.randn(3, 1)) - offset = offset[::-1] # bgr 2 rgb - offset = offset.reshape(3) - image = image - offset - - if self.blur > random.random(): - image = self.blur_image(image) - - if self.resize: - imageSize = image.shape[:2] - ratio = max(math.pow(random.random(), 0.5), 0.2) # 25 ~ 255 - rand_size = (int(round(ratio*imageSize[0])), int(round(ratio*imageSize[1]))) - image = cv2.resize(image, rand_size) - image = cv2.resize(image, tuple(imageSize)) - - if self.flip and self.flip > Augmentation.random(): - image = cv2.flip(image, 1) - mask = cv2.flip(mask, 1) - width = image.shape[1] - bbox = Corner(width - 1 - bbox.x2, bbox.y1, width - 1 - bbox.x1, bbox.y2) - - return image, bbox, mask - - -class AnchorTargetLayer: - def __init__(self, cfg): - self.thr_high = 0.6 - self.thr_low = 0.3 - self.negative = 16 - self.rpn_batch = 64 - self.positive = 16 - - self.__dict__.update(cfg) - - def __call__(self, anchor, target, size, neg=False, need_iou=False): - anchor_num = anchor.anchors.shape[0] - - cls = np.zeros((anchor_num, size, size), dtype=np.int64) - cls[...] 
= -1 # -1 ignore 0 negative 1 positive - delta = np.zeros((4, anchor_num, size, size), dtype=np.float32) - delta_weight = np.zeros((anchor_num, size, size), dtype=np.float32) - - def select(position, keep_num=16): - num = position[0].shape[0] - if num <= keep_num: - return position, num - slt = np.arange(num) - np.random.shuffle(slt) - slt = slt[:keep_num] - return tuple(p[slt] for p in position), keep_num - - if neg: - l = size // 2 - 3 - r = size // 2 + 3 + 1 - - cls[:, l:r, l:r] = 0 - - neg, neg_num = select(np.where(cls == 0), self.negative) - cls[:] = -1 - cls[neg] = 0 - - if not need_iou: - return cls, delta, delta_weight - else: - overlap = np.zeros((anchor_num, size, size), dtype=np.float32) - return cls, delta, delta_weight, overlap - - tcx, tcy, tw, th = corner2center(target) - - anchor_box = anchor.all_anchors[0] - anchor_center = anchor.all_anchors[1] - x1, y1, x2, y2 = anchor_box[0], anchor_box[1], anchor_box[2], anchor_box[3] - cx, cy, w, h = anchor_center[0], anchor_center[1], anchor_center[2], anchor_center[3] - - # delta - delta[0] = (tcx - cx) / w - delta[1] = (tcy - cy) / h - delta[2] = np.log(tw / w) - delta[3] = np.log(th / h) - - # IoU - overlap = IoU([x1, y1, x2, y2], target) - - pos = np.where(overlap > self.thr_high) - neg = np.where(overlap < self.thr_low) - - pos, pos_num = select(pos, self.positive) - neg, neg_num = select(neg, self.rpn_batch - pos_num) - - cls[pos] = 1 - delta_weight[pos] = 1. / (pos_num + 1e-6) - - cls[neg] = 0 - - if not need_iou: - return cls, delta, delta_weight - else: - return cls, delta, delta_weight, overlap - - -class DataSets(Dataset): - def __init__(self, cfg, anchor_cfg, num_epoch=1): - super(DataSets, self).__init__() - global logger - logger = logging.getLogger('global') - - # anchors - self.anchors = Anchors(anchor_cfg) - - # size - self.template_size = 127 - self.origin_size = 127 - self.search_size = 255 - self.size = 17 - self.base_size = 0 - self.crop_size = 0 - - if 'template_size' in cfg: - self.template_size = cfg['template_size'] - if 'origin_size' in cfg: - self.origin_size = cfg['origin_size'] - if 'search_size' in cfg: - self.search_size = cfg['search_size'] - if 'base_size' in cfg: - self.base_size = cfg['base_size'] - if 'size' in cfg: - self.size = cfg['size'] - - if (self.search_size - self.template_size) / self.anchors.stride + 1 + self.base_size != self.size: - raise Exception("size not match!") # TODO: calculate size online - if 'crop_size' in cfg: - self.crop_size = cfg['crop_size'] - self.template_small = False - if 'template_small' in cfg and cfg['template_small']: - self.template_small = True - - self.anchors.generate_all_anchors(im_c=self.search_size//2, size=self.size) - - if 'anchor_target' not in cfg: - cfg['anchor_target'] = {} - self.anchor_target = AnchorTargetLayer(cfg['anchor_target']) - - # data sets - if 'datasets' not in cfg: - raise(Exception('DataSet need "{}"'.format('datasets'))) - - self.all_data = [] - start = 0 - self.num = 0 - for name in cfg['datasets']: - dataset = cfg['datasets'][name] - dataset['mark'] = name - dataset['start'] = start - - dataset = SubDataSet(dataset) - dataset.log() - self.all_data.append(dataset) - - start += dataset.num # real video number - self.num += dataset.num_use # the number used for subset shuffle - - # data augmentation - aug_cfg = cfg['augmentation'] - self.template_aug = Augmentation(aug_cfg['template']) - self.search_aug = Augmentation(aug_cfg['search']) - self.gray = aug_cfg['gray'] - self.neg = aug_cfg['neg'] - self.inner_neg = 0 if 'inner_neg' not 
in aug_cfg else aug_cfg['inner_neg'] - - self.pick = None # list to save id for each img - if 'num' in cfg: # number used in training for all dataset - self.num = int(cfg['num']) - self.num *= num_epoch - self.shuffle() - - self.infos = { - 'template': self.template_size, - 'search': self.search_size, - 'template_small': self.template_small, - 'gray': self.gray, - 'neg': self.neg, - 'inner_neg': self.inner_neg, - 'crop_size': self.crop_size, - 'anchor_target': self.anchor_target.__dict__, - 'num': self.num // num_epoch - } - logger.info('dataset informations: \n{}'.format(json.dumps(self.infos, indent=4))) - - def imread(self, path): - img = cv2.imread(path) - - if self.origin_size == self.template_size: - return img, 1.0 - - def map_size(exe, size): - return int(round(((exe + 1) / (self.origin_size + 1) * (size+1) - 1))) - - nsize = map_size(self.template_size, img.shape[1]) - - img = cv2.resize(img, (nsize, nsize)) - - return img, nsize / img.shape[1] - - def shuffle(self): - pick = [] - m = 0 - while m < self.num: - p = [] - for subset in self.all_data: - sub_p = subset.shuffle() - p += sub_p - - sample_random.shuffle(p) - - pick += p - m = len(pick) - self.pick = pick - logger.info("shuffle done!") - logger.info("dataset length {}".format(self.num)) - - def __len__(self): - return self.num - - def find_dataset(self, index): - for dataset in self.all_data: - if dataset.start + dataset.num > index: - return dataset, index - dataset.start - - def __getitem__(self, index, debug=False): - index = self.pick[index] - dataset, index = self.find_dataset(index) - - gray = self.gray and self.gray > random.random() - neg = self.neg and self.neg > random.random() - - if neg: - template = dataset.get_random_target(index) - if self.inner_neg and self.inner_neg > random.random(): - search = dataset.get_random_target() - else: - search = random.choice(self.all_data).get_random_target() - else: - template, search = dataset.get_positive_pair(index) - - def center_crop(img, size): - shape = img.shape[1] - if shape == size: return img - c = shape // 2 - l = c - size // 2 - r = c + size // 2 + 1 - return img[l:r, l:r] - - template_image, scale_z = self.imread(template[0]) - - if self.template_small: - template_image = center_crop(template_image, self.template_size) - - search_image, scale_x = self.imread(search[0]) - - if dataset.has_mask and not neg: - search_mask = (cv2.imread(search[2], 0) > 0).astype(np.float32) - else: - search_mask = np.zeros(search_image.shape[:2], dtype=np.float32) - - if self.crop_size > 0: - search_image = center_crop(search_image, self.crop_size) - search_mask = center_crop(search_mask, self.crop_size) - - def toBBox(image, shape): - imh, imw = image.shape[:2] - if len(shape) == 4: - w, h = shape[2]-shape[0], shape[3]-shape[1] - else: - w, h = shape - context_amount = 0.5 - exemplar_size = self.template_size # 127 - wc_z = w + context_amount * (w+h) - hc_z = h + context_amount * (w+h) - s_z = np.sqrt(wc_z * hc_z) - scale_z = exemplar_size / s_z - w = w*scale_z - h = h*scale_z - cx, cy = imw//2, imh//2 - bbox = center2corner(Center(cx, cy, w, h)) - return bbox - - template_box = toBBox(template_image, template[1]) - search_box = toBBox(search_image, search[1]) - - template, _, _ = self.template_aug(template_image, template_box, self.template_size, gray=gray) - search, bbox, mask = self.search_aug(search_image, search_box, self.search_size, gray=gray, mask=search_mask) - - def draw(image, box, name): - image = image.copy() - x1, y1, x2, y2 = map(lambda x: int(round(x)), box) - 
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0)) - cv2.imwrite(name, image) - - if debug: - draw(template_image, template_box, "debug/{:06d}_ot.jpg".format(index)) - draw(search_image, search_box, "debug/{:06d}_os.jpg".format(index)) - draw(template, _, "debug/{:06d}_t.jpg".format(index)) - draw(search, bbox, "debug/{:06d}_s.jpg".format(index)) - - cls, delta, delta_weight = self.anchor_target(self.anchors, bbox, self.size, neg) - if dataset.has_mask and not neg: - mask_weight = cls.max(axis=0, keepdims=True) - else: - mask_weight = np.zeros([1, cls.shape[1], cls.shape[2]], dtype=np.float32) - - template, search = map(lambda x: np.transpose(x, (2, 0, 1)).astype(np.float32), [template, search]) - - mask = (np.expand_dims(mask, axis=0) > 0.5) * 2 - 1 # 1*H*W - - return template, search, cls, delta, delta_weight, np.array(bbox, np.float32), \ - np.array(mask, np.float32), np.array(mask_weight, np.float32) - diff --git a/spaces/oguzakif/video-object-remover/SiamMask/utils/__init__.py b/spaces/oguzakif/video-object-remover/SiamMask/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/oluyemitosin/Honda_or_Mercedes/app.py b/spaces/oluyemitosin/Honda_or_Mercedes/app.py deleted file mode 100644 index 46ce74f07d5807aee00156fbd27c7cda5847fdfb..0000000000000000000000000000000000000000 --- a/spaces/oluyemitosin/Honda_or_Mercedes/app.py +++ /dev/null @@ -1,54 +0,0 @@ -# -*- coding: utf-8 -*- -"""Car_Identifier_Voila.ipynb - -Automatically generated by Colaboratory. - -Original file is located at - https://colab.research.google.com/drive/1_SwRA0VUkga3zoCsnBIEQv2ExiM3D9H0 -""" - - - -from fastai.vision import * -from fastbook import * -import gradio as gr - -"""# What maker is this car? -Ahmed is a secondary school student in Lagos. He needs to get a side job to support his single mom and little sister. He got a job as a survey assistant. His first task is to find out if some set of pictures contain Mercedes or Honda cars. Cool, right? - - -But Ahmed can barely identify all the hundreds of models these car giants have released over the years. He definitely needs someone who can identify any Honda or Mercedes from any angle or he's going to be turning in unreliable reports to his supervisor. And NO!, he can't share his wage with anyone. - - -What's the way out? - - -## THE MERCEDES-HONDA IDENTIFIER! - -Ahmed got an intelligent friend who is more efficient at knowing Mercedes and Honda cars, no matter their posture. Not only that, his new friend doesn't need money. - -Test Ahmed's new intelligent friend and see how intelligent his friend is! 
-""" - - - -learner_inf = load_learner('car_model_identifier.pkl') - - - -def classify_image(file): -# image = PILImage.create(file) - categories = ('Honda', 'Mercedes') - pred, pred_idx, probs = learner_inf.predict(file) - result = dict(zip(categories, map(float,probs))) - return result - -app = gr.Interface( - fn = classify_image, - inputs = gr.Image(), - outputs= gr.Label(), - examples = ['download (1).png', 'download (2).png', 'download (4).png', 'images (41).png', 'images - 2022-12-07T052510.177.png', 'images - 2022-12-07T052511.014.png', 'images - 2022-12-07T053716.324.png', 'images - 2022-12-07T053818.478.png', 'images - 2022-12-07T053820.398.png'] -) - -app.launch() - diff --git a/spaces/open-spaced-repetition/fsrs4anki_simulator/README.md b/spaces/open-spaced-repetition/fsrs4anki_simulator/README.md deleted file mode 100644 index b804cbf2414eceb1f4d98e4b2f815cf4b18ed392..0000000000000000000000000000000000000000 --- a/spaces/open-spaced-repetition/fsrs4anki_simulator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fsrs4anki Simulator -emoji: 🐠 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.41.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/oppappi/wd-v1-4-tags/app.py b/spaces/oppappi/wd-v1-4-tags/app.py deleted file mode 100644 index 33fa06d229e5bdad6268136cc0fb55c64909cbfd..0000000000000000000000000000000000000000 --- a/spaces/oppappi/wd-v1-4-tags/app.py +++ /dev/null @@ -1,285 +0,0 @@ -from __future__ import annotations - -import argparse -import functools -import html -import os - -import gradio as gr -import huggingface_hub -import numpy as np -import onnxruntime as rt -import pandas as pd -import piexif -import piexif.helper -import PIL.Image - -from Utils import dbimutils - -TITLE = "WaifuDiffusion v1.4 Tags" -DESCRIPTION = """ -Demo for: -- [SmilingWolf/wd-v1-4-moat-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-moat-tagger-v2) -- [SmilingWolf/wd-v1-4-swinv2-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2) -- [SmilingWolf/wd-v1-4-convnext-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2) -- [SmilingWolf/wd-v1-4-convnextv2-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-convnextv2-tagger-v2) -- [SmilingWolf/wd-v1-4-vit-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2) - -Includes "ready to copy" prompt and a prompt analyzer. 
- -Modified from [NoCrypt/DeepDanbooru_string](https://huggingface.co/spaces/NoCrypt/DeepDanbooru_string) -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -Example image by [ほし☆☆☆](https://www.pixiv.net/en/users/43565085) -""" - -HF_TOKEN = os.environ["HF_TOKEN"] -MOAT_MODEL_REPO = "SmilingWolf/wd-v1-4-moat-tagger-v2" -SWIN_MODEL_REPO = "SmilingWolf/wd-v1-4-swinv2-tagger-v2" -CONV_MODEL_REPO = "SmilingWolf/wd-v1-4-convnext-tagger-v2" -CONV2_MODEL_REPO = "SmilingWolf/wd-v1-4-convnextv2-tagger-v2" -VIT_MODEL_REPO = "SmilingWolf/wd-v1-4-vit-tagger-v2" -MODEL_FILENAME = "model.onnx" -LABEL_FILENAME = "selected_tags.csv" - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument("--score-slider-step", type=float, default=0.05) - parser.add_argument("--score-general-threshold", type=float, default=0.35) - parser.add_argument("--score-character-threshold", type=float, default=0.85) - parser.add_argument("--share", action="store_true") - return parser.parse_args() - - -def load_model(model_repo: str, model_filename: str) -> rt.InferenceSession: - path = huggingface_hub.hf_hub_download( - model_repo, model_filename, use_auth_token=HF_TOKEN - ) - model = rt.InferenceSession(path) - return model - - -def change_model(model_name): - global loaded_models - - if model_name == "MOAT": - model = load_model(MOAT_MODEL_REPO, MODEL_FILENAME) - elif model_name == "SwinV2": - model = load_model(SWIN_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ConvNext": - model = load_model(CONV_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ConvNextV2": - model = load_model(CONV2_MODEL_REPO, MODEL_FILENAME) - elif model_name == "ViT": - model = load_model(VIT_MODEL_REPO, MODEL_FILENAME) - - loaded_models[model_name] = model - return loaded_models[model_name] - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download( - MOAT_MODEL_REPO, LABEL_FILENAME, use_auth_token=HF_TOKEN - ) - df = pd.read_csv(path) - - tag_names = df["name"].tolist() - rating_indexes = list(np.where(df["category"] == 9)[0]) - general_indexes = list(np.where(df["category"] == 0)[0]) - character_indexes = list(np.where(df["category"] == 4)[0]) - return tag_names, rating_indexes, general_indexes, character_indexes - - -def plaintext_to_html(text): - text = ( - "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "</p>
          " - ) - return text - - -def predict( - image: PIL.Image.Image, - model_name: str, - general_threshold: float, - character_threshold: float, - tag_names: list[str], - rating_indexes: list[np.int64], - general_indexes: list[np.int64], - character_indexes: list[np.int64], -): - global loaded_models - - rawimage = image - - model = loaded_models[model_name] - if model is None: - model = change_model(model_name) - - _, height, width, _ = model.get_inputs()[0].shape - - # Alpha to white - image = image.convert("RGBA") - new_image = PIL.Image.new("RGBA", image.size, "WHITE") - new_image.paste(image, mask=image) - image = new_image.convert("RGB") - image = np.asarray(image) - - # PIL RGB to OpenCV BGR - image = image[:, :, ::-1] - - image = dbimutils.make_square(image, height) - image = dbimutils.smart_resize(image, height) - image = image.astype(np.float32) - image = np.expand_dims(image, 0) - - input_name = model.get_inputs()[0].name - label_name = model.get_outputs()[0].name - probs = model.run([label_name], {input_name: image})[0] - - labels = list(zip(tag_names, probs[0].astype(float))) - - # First 4 labels are actually ratings: pick one with argmax - ratings_names = [labels[i] for i in rating_indexes] - rating = dict(ratings_names) - - # Then we have general tags: pick any where prediction confidence > threshold - general_names = [labels[i] for i in general_indexes] - general_res = [x for x in general_names if x[1] > general_threshold] - general_res = dict(general_res) - - # Everything else is characters: pick any where prediction confidence > threshold - character_names = [labels[i] for i in character_indexes] - character_res = [x for x in character_names if x[1] > character_threshold] - character_res = dict(character_res) - - b = dict(sorted(general_res.items(), key=lambda item: item[1], reverse=True)) - a = ( - ", ".join(list(b.keys())) - .replace("_", " ") - .replace("(", "\(") - .replace(")", "\)") - ) - c = ", ".join(list(b.keys())) - - items = rawimage.info - geninfo = "" - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b"") - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode("utf8", errors="ignore") - - items["exif comment"] = exif_comment - geninfo = exif_comment - - for field in [ - "jfif", - "jfif_version", - "jfif_unit", - "jfif_density", - "dpi", - "exif", - "loop", - "background", - "timestamp", - "duration", - ]: - items.pop(field, None) - - geninfo = items.get("parameters", geninfo) - - info = f""" -

<p><h4>PNG Info</h4></p>
          -""" - for key, text in items.items(): - info += ( - f""" -
<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
          -""".strip() - + "\n" - ) - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>
          " - - return (a, c, rating, character_res, general_res, info) - - -def main(): - global loaded_models - loaded_models = { - "MOAT": None, - "SwinV2": None, - "ConvNext": None, - "ConvNextV2": None, - "ViT": None, - } - - args = parse_args() - - change_model("MOAT") - - tag_names, rating_indexes, general_indexes, character_indexes = load_labels() - - func = functools.partial( - predict, - tag_names=tag_names, - rating_indexes=rating_indexes, - general_indexes=general_indexes, - character_indexes=character_indexes, - ) - - gr.Interface( - fn=func, - inputs=[ - gr.Image(type="pil", label="Input"), - gr.Radio( - ["MOAT", "SwinV2", "ConvNext", "ConvNextV2", "ViT"], - value="MOAT", - label="Model", - ), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_general_threshold, - label="General Tags Threshold", - ), - gr.Slider( - 0, - 1, - step=args.score_slider_step, - value=args.score_character_threshold, - label="Character Tags Threshold", - ), - ], - outputs=[ - gr.Textbox(label="Output (string)"), - gr.Textbox(label="Output (raw string)"), - gr.Label(label="Rating"), - gr.Label(label="Output (characters)"), - gr.Label(label="Output (tags)"), - gr.HTML(), - ], - examples=[["power.jpg", "MOAT", 0.35, 0.85]], - title=TITLE, - description=DESCRIPTION, - allow_flagging="never", - ).launch( - enable_queue=True, - share=args.share, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/owaiskha9654/Custom_Yolov7/utils/__init__.py b/spaces/owaiskha9654/Custom_Yolov7/utils/__init__.py deleted file mode 100644 index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Custom_Yolov7/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# init \ No newline at end of file diff --git a/spaces/owaiskha9654/Custom_Yolov7/utils/autoanchor.py b/spaces/owaiskha9654/Custom_Yolov7/utils/autoanchor.py deleted file mode 100644 index f491032e53ab43cd81d966d127bd92f9b414b9fe..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Custom_Yolov7/utils/autoanchor.py +++ /dev/null @@ -1,160 +0,0 @@ -# Auto-anchor utils - -import numpy as np -import torch -import yaml -from scipy.cluster.vq import kmeans -from tqdm import tqdm - -from utils.general import colorstr - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print('Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) - - -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - prefix = colorstr('autoanchor: ') - print(f'\n{prefix}Analyzing anchors... ', end='') - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1. 
/ thr).float().mean() # best possible recall - return bpr, aat - - anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors - bpr, aat = metric(anchors) - print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='') - if bpr < 0.98: # threshold to recompute - print('. Attempting to improve anchors, please wait...') - na = m.anchor_grid.numel() // 2 # number of anchors - try: - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - except Exception as e: - print(f'{prefix}ERROR: {e}') - new_bpr = metric(anchors)[0] - if new_bpr > bpr: # replace anchors - anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference - check_anchor_order(m) - m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss - print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.') - else: - print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.') - print('') # newline - - -def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - path: path to dataset *.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - thr = 1. / thr - prefix = colorstr('autoanchor: ') - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr') - print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' - f'past_thr={x[x > thr].mean():.3f}-mean: ', end='') - for i, x in enumerate(k): - print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg - return k - - if isinstance(path, str): # *.yaml file - with open(path) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict - from utils.datasets import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - else: - dataset = path # dataset - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - print(f'{prefix}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.') - wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels - # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans calculation - print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...') - s = wh.std(0) # sigmas for whitening - k, dist = kmeans(wh / s, n, iter=30) # points, mean distance - assert len(k) == n, print(f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}') - k *= s - wh = torch.tensor(wh, dtype=torch.float32) # filtered - wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered - k = print_results(k) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - npr = np.random - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k) - - return print_results(k) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_flax.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_flax.py deleted file mode 100644 index 4ac4f969ee69658e91341ac01756ae71c643f262..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/dreambooth/train_dreambooth_flax.py +++ /dev/null @@ -1,709 +0,0 @@ -import argparse -import hashlib -import logging -import math -import os -from pathlib import Path -from typing import Optional - -import jax -import jax.numpy as jnp -import numpy as np -import optax -import torch -import torch.utils.checkpoint -import transformers -from flax import jax_utils -from flax.training import train_state -from flax.training.common_utils import shard -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from jax.experimental.compilation_cache import compilation_cache as cc -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed - -from diffusers import ( - FlaxAutoencoderKL, - FlaxDDPMScheduler, - FlaxPNDMScheduler, - FlaxStableDiffusionPipeline, - FlaxUNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.22.0.dev0") - -# Cache compiled models across invocations of this script. 
-cc.initialize_cache(os.path.expanduser("~/.cache/jax/compilation_cache")) - -logger = logging.getLogger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_vae_name_or_path", - type=str, - default=None, - help="Path to pretrained vae or vae identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--save_steps", type=int, default=None, help="Save a checkpoint every X steps.") - parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.instance_data_dir is None: - raise ValueError("You must specify a train data directory.") - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - class_num=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - if class_num is not None: - self.num_class_images = min(len(self.class_images_path), class_num) - else: - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def get_params_to_save(params): - return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params)) - - -def main(): - args = parse_args() - - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. 
- logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - transformers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - - if args.seed is not None: - set_seed(args.seed) - - rng = jax.random.PRNGKey(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, safety_checker=None, revision=args.revision - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - total_sample_batch_size = args.sample_batch_size * jax.local_device_count() - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=total_sample_batch_size) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not jax.process_index() == 0 - ): - prompt_ids = pipeline.prepare_inputs(example["prompt"]) - prompt_ids = shard(prompt_ids) - p_params = jax_utils.replicate(params) - rng = jax.random.split(rng)[0] - sample_rng = jax.random.split(rng, jax.device_count()) - images = pipeline(prompt_ids, p_params, sample_rng, jit=True).images - images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) - images = pipeline.numpy_to_pil(np.array(images)) - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - - # Handle the repository creation - if jax.process_index() == 0: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer and add the placeholder token as a additional special token - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision - ) - else: - raise NotImplementedError("No tokenizer specified!") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - class_num=args.num_class_images, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for 
example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad( - {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt" - ).input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - batch = {k: v.numpy() for k, v in batch.items()} - return batch - - total_train_batch_size = args.train_batch_size * jax.local_device_count() - if len(train_dataset) < total_train_batch_size: - raise ValueError( - f"Training batch size is {total_train_batch_size}, but your dataset only contains" - f" {len(train_dataset)} images. Please, use a larger dataset or reduce the effective batch size. Note that" - f" there are {jax.local_device_count()} parallel devices, so your batch size can't be smaller than that." - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=total_train_batch_size, shuffle=True, collate_fn=collate_fn, drop_last=True - ) - - weight_dtype = jnp.float32 - if args.mixed_precision == "fp16": - weight_dtype = jnp.float16 - elif args.mixed_precision == "bf16": - weight_dtype = jnp.bfloat16 - - if args.pretrained_vae_name_or_path: - # TODO(patil-suraj): Upload flax weights for the VAE - vae_arg, vae_kwargs = (args.pretrained_vae_name_or_path, {"from_pt": True}) - else: - vae_arg, vae_kwargs = (args.pretrained_model_name_or_path, {"subfolder": "vae", "revision": args.revision}) - - # Load models and create wrapper for stable diffusion - text_encoder = FlaxCLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", dtype=weight_dtype, revision=args.revision - ) - vae, vae_params = FlaxAutoencoderKL.from_pretrained( - vae_arg, - dtype=weight_dtype, - **vae_kwargs, - ) - unet, unet_params = FlaxUNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", dtype=weight_dtype, revision=args.revision - ) - - # Optimization - if args.scale_lr: - args.learning_rate = args.learning_rate * total_train_batch_size - - constant_scheduler = optax.constant_schedule(args.learning_rate) - - adamw = optax.adamw( - learning_rate=constant_scheduler, - b1=args.adam_beta1, - b2=args.adam_beta2, - eps=args.adam_epsilon, - weight_decay=args.adam_weight_decay, - ) - - optimizer = optax.chain( - optax.clip_by_global_norm(args.max_grad_norm), - adamw, - ) - - unet_state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer) - text_encoder_state = train_state.TrainState.create( - apply_fn=text_encoder.__call__, params=text_encoder.params, tx=optimizer - ) - - noise_scheduler = FlaxDDPMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000 - ) - noise_scheduler_state = noise_scheduler.create_state() - - # Initialize our training - train_rngs = jax.random.split(rng, jax.local_device_count()) - - def train_step(unet_state, text_encoder_state, vae_params, batch, train_rng): - dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3) - - if args.train_text_encoder: - params = {"text_encoder": text_encoder_state.params, "unet": unet_state.params} - else: - params = 
{"unet": unet_state.params} - - def compute_loss(params): - # Convert images to latent space - vae_outputs = vae.apply( - {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode - ) - latents = vae_outputs.latent_dist.sample(sample_rng) - # (NHWC) -> (NCHW) - latents = jnp.transpose(latents, (0, 3, 1, 2)) - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise_rng, timestep_rng = jax.random.split(sample_rng) - noise = jax.random.normal(noise_rng, latents.shape) - # Sample a random timestep for each image - bsz = latents.shape[0] - timesteps = jax.random.randint( - timestep_rng, - (bsz,), - 0, - noise_scheduler.config.num_train_timesteps, - ) - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps) - - # Get the text embedding for conditioning - if args.train_text_encoder: - encoder_hidden_states = text_encoder_state.apply_fn( - batch["input_ids"], params=params["text_encoder"], dropout_rng=dropout_rng, train=True - )[0] - else: - encoder_hidden_states = text_encoder( - batch["input_ids"], params=text_encoder_state.params, train=False - )[0] - - # Predict the noise residual - model_pred = unet.apply( - {"params": params["unet"]}, noisy_latents, timesteps, encoder_hidden_states, train=True - ).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and noise_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = jnp.split(model_pred, 2, axis=0) - target, target_prior = jnp.split(target, 2, axis=0) - - # Compute instance loss - loss = (target - model_pred) ** 2 - loss = loss.mean() - - # Compute prior loss - prior_loss = (target_prior - model_pred_prior) ** 2 - prior_loss = prior_loss.mean() - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = (target - model_pred) ** 2 - loss = loss.mean() - - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grad = grad_fn(params) - grad = jax.lax.pmean(grad, "batch") - - new_unet_state = unet_state.apply_gradients(grads=grad["unet"]) - if args.train_text_encoder: - new_text_encoder_state = text_encoder_state.apply_gradients(grads=grad["text_encoder"]) - else: - new_text_encoder_state = text_encoder_state - - metrics = {"loss": loss} - metrics = jax.lax.pmean(metrics, axis_name="batch") - - return new_unet_state, new_text_encoder_state, metrics, new_train_rng - - # Create parallel version of the train step - p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0, 1)) - - # Replicate the train state on each device - unet_state = jax_utils.replicate(unet_state) - text_encoder_state = jax_utils.replicate(text_encoder_state) - vae_params = jax_utils.replicate(vae_params) - - # Train! - num_update_steps_per_epoch = math.ceil(len(train_dataloader)) - - # Scheduler and math around the number of training steps. 
- if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel & distributed) = {total_train_batch_size}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - - def checkpoint(step=None): - # Create the pipeline using the trained modules and save it. - scheduler, _ = FlaxPNDMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") - safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained( - "CompVis/stable-diffusion-safety-checker", from_pt=True - ) - pipeline = FlaxStableDiffusionPipeline( - text_encoder=text_encoder, - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), - ) - - outdir = os.path.join(args.output_dir, str(step)) if step else args.output_dir - pipeline.save_pretrained( - outdir, - params={ - "text_encoder": get_params_to_save(text_encoder_state.params), - "vae": get_params_to_save(vae_params), - "unet": get_params_to_save(unet_state.params), - "safety_checker": safety_checker.params, - }, - ) - - if args.push_to_hub: - message = f"checkpoint-{step}" if step is not None else "End of training" - repo.push_to_hub(commit_message=message, blocking=False, auto_lfs_prune=True) - - global_step = 0 - - epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... ", position=0) - for epoch in epochs: - # ======================== Training ================================ - - train_metrics = [] - - steps_per_epoch = len(train_dataset) // total_train_batch_size - train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False) - # train - for batch in train_dataloader: - batch = shard(batch) - unet_state, text_encoder_state, train_metric, train_rngs = p_train_step( - unet_state, text_encoder_state, vae_params, batch, train_rngs - ) - train_metrics.append(train_metric) - - train_step_progress_bar.update(jax.local_device_count()) - - global_step += 1 - if jax.process_index() == 0 and args.save_steps and global_step % args.save_steps == 0: - checkpoint(global_step) - if global_step >= args.max_train_steps: - break - - train_metric = jax_utils.unreplicate(train_metric) - - train_step_progress_bar.close() - epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})") - - if jax.process_index() == 0: - checkpoint() - - -if __name__ == "__main__": - main() diff --git a/spaces/patgpt4/MusicGen/tests/modules/test_rope.py b/spaces/patgpt4/MusicGen/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. - xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - 
tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. - assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/paulbricman/conceptarium/backend/security.py b/spaces/paulbricman/conceptarium/backend/security.py deleted file mode 100644 index e2d3e77e59121204bb3c7e67ee40e80bfc13a8a8..0000000000000000000000000000000000000000 --- a/spaces/paulbricman/conceptarium/backend/security.py +++ /dev/null @@ -1,50 +0,0 @@ -from pathlib import Path -import json -import os - - -def auth(token, compact=False): - if not token: - return { - 'custodian': False - } - - knowledge_base_path = Path('..') / 'knowledge' - records_path = knowledge_base_path / 'records.json' - - if not records_path.exists(): - if not knowledge_base_path.exists(): - os.mkdir(knowledge_base_path) - - records = { - 'custodian_token': token - } - json.dump(records, open(records_path, 'w')) - - return { - 'custodian': True - } - else: - records = json.load(open(records_path)) - - if records['custodian_token'] == token: - return { - 'custodian': True - } - else: - microverses_path = Path('..') / 'knowledge' / 'microverses.json' - if not microverses_path.exists(): - json.dump([], open(microverses_path, 'w')) - - microverses = json.load(open(microverses_path)) - authorized_microverse = [ - e for e in microverses if e['token'] == token] - - if compact: - if len(authorized_microverse) > 0: - authorized_microverse[0].pop('embeddings') - - return { - 'custodian': False, - 'authorized_microverse': authorized_microverse - } diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/__init__.py deleted file mode 100644 index 858a41014169b8f0eb1b905fa3bb69c753a1bda5..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/__init__.py +++ /dev/null @@ -1,132 +0,0 @@ -""" -Package containing all pip commands -""" - -import importlib -from collections import namedtuple -from typing import Any, Dict, Optional - -from pip._internal.cli.base_command import Command - -CommandInfo = namedtuple("CommandInfo", "module_path, class_name, summary") - -# This dictionary does a bunch of heavy lifting for help output: -# - Enables avoiding additional (costly) imports for presenting `--help`. -# - The ordering matters for help display. -# -# Even though the module path starts with the same "pip._internal.commands" -# prefix, the full path makes testing easier (specifically when modifying -# `commands_dict` in test setup / teardown). 
-commands_dict: Dict[str, CommandInfo] = { - "install": CommandInfo( - "pip._internal.commands.install", - "InstallCommand", - "Install packages.", - ), - "download": CommandInfo( - "pip._internal.commands.download", - "DownloadCommand", - "Download packages.", - ), - "uninstall": CommandInfo( - "pip._internal.commands.uninstall", - "UninstallCommand", - "Uninstall packages.", - ), - "freeze": CommandInfo( - "pip._internal.commands.freeze", - "FreezeCommand", - "Output installed packages in requirements format.", - ), - "inspect": CommandInfo( - "pip._internal.commands.inspect", - "InspectCommand", - "Inspect the python environment.", - ), - "list": CommandInfo( - "pip._internal.commands.list", - "ListCommand", - "List installed packages.", - ), - "show": CommandInfo( - "pip._internal.commands.show", - "ShowCommand", - "Show information about installed packages.", - ), - "check": CommandInfo( - "pip._internal.commands.check", - "CheckCommand", - "Verify installed packages have compatible dependencies.", - ), - "config": CommandInfo( - "pip._internal.commands.configuration", - "ConfigurationCommand", - "Manage local and global configuration.", - ), - "search": CommandInfo( - "pip._internal.commands.search", - "SearchCommand", - "Search PyPI for packages.", - ), - "cache": CommandInfo( - "pip._internal.commands.cache", - "CacheCommand", - "Inspect and manage pip's wheel cache.", - ), - "index": CommandInfo( - "pip._internal.commands.index", - "IndexCommand", - "Inspect information available from package indexes.", - ), - "wheel": CommandInfo( - "pip._internal.commands.wheel", - "WheelCommand", - "Build wheels from your requirements.", - ), - "hash": CommandInfo( - "pip._internal.commands.hash", - "HashCommand", - "Compute hashes of package archives.", - ), - "completion": CommandInfo( - "pip._internal.commands.completion", - "CompletionCommand", - "A helper command used for command completion.", - ), - "debug": CommandInfo( - "pip._internal.commands.debug", - "DebugCommand", - "Show information useful for debugging.", - ), - "help": CommandInfo( - "pip._internal.commands.help", - "HelpCommand", - "Show help for commands.", - ), -} - - -def create_command(name: str, **kwargs: Any) -> Command: - """ - Create an instance of the Command class with the given name. 
- """ - module_path, class_name, summary = commands_dict[name] - module = importlib.import_module(module_path) - command_class = getattr(module, class_name) - command = command_class(name=name, summary=summary, **kwargs) - - return command - - -def get_similar_commands(name: str) -> Optional[str]: - """Command name auto-correct.""" - from difflib import get_close_matches - - name = name.lower() - - close_commands = get_close_matches(name, commands_dict.keys()) - - if close_commands: - return close_commands[0] - else: - return None diff --git a/spaces/pngwn/nextjs/out/index.html b/spaces/pngwn/nextjs/out/index.html deleted file mode 100644 index 0a3543861d71fbd7c8a0296488e508ab14eb9985..0000000000000000000000000000000000000000 --- a/spaces/pngwn/nextjs/out/index.html +++ /dev/null @@ -1,115 +0,0 @@ -Create Next App \ No newline at end of file diff --git a/spaces/ppsantiago/chatGPT/app.py b/spaces/ppsantiago/chatGPT/app.py deleted file mode 100644 index b3f69ff838c3326b5845cc3f06793017ed94f667..0000000000000000000000000000000000000000 --- a/spaces/ppsantiago/chatGPT/app.py +++ /dev/null @@ -1,454 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "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" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False -auth_list = [] - -if not my_api_key: - my_api_key = os.environ.get("my_api_key") -if dockerflag: - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - auth_list.append((os.environ.get("USERNAME"), os.environ.get("PASSWORD"))) - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - authflag = True - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=1): - # gr.HTML(title) - gr.HTML('

          YY专用ChatGPT
          ') - with gr.Column(scale=4): - # gr.HTML('
          Duplicate Space | Duplicate the Space and run securely with your OpenAI API Key
          ') - pass - with gr.Column(scale=4): - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - usageTxt = gr.Markdown("**发送消息** 或 **提交key** 以显示额度", elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - 
maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False, visible=False): - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - gr.HTML(footer.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(sum(token_count.value[-4:])), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - 
[templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "YY的专用ChatGPT" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=auth_list, - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=auth_list, - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/prerna9811/Chord/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h b/spaces/prerna9811/Chord/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h deleted file mode 100644 index ed806ac4524e3b4564778822846f40faf8c488bf..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h +++ /dev/null @@ -1,183 +0,0 @@ -/* DO NOT EDIT THIS FILE - it is machine generated */ -#if defined(__APPLE__) -#include -#else -#include -#endif -/* Header for class com_portaudio_PortAudio */ - -#ifndef _Included_com_portaudio_PortAudio -#define _Included_com_portaudio_PortAudio -#ifdef __cplusplus -extern "C" { -#endif -#undef com_portaudio_PortAudio_FLAG_CLIP_OFF -#define com_portaudio_PortAudio_FLAG_CLIP_OFF 1L -#undef 
com_portaudio_PortAudio_FLAG_DITHER_OFF -#define com_portaudio_PortAudio_FLAG_DITHER_OFF 2L -#undef com_portaudio_PortAudio_FORMAT_FLOAT_32 -#define com_portaudio_PortAudio_FORMAT_FLOAT_32 1L -#undef com_portaudio_PortAudio_FORMAT_INT_32 -#define com_portaudio_PortAudio_FORMAT_INT_32 2L -#undef com_portaudio_PortAudio_FORMAT_INT_24 -#define com_portaudio_PortAudio_FORMAT_INT_24 4L -#undef com_portaudio_PortAudio_FORMAT_INT_16 -#define com_portaudio_PortAudio_FORMAT_INT_16 8L -#undef com_portaudio_PortAudio_FORMAT_INT_8 -#define com_portaudio_PortAudio_FORMAT_INT_8 16L -#undef com_portaudio_PortAudio_FORMAT_UINT_8 -#define com_portaudio_PortAudio_FORMAT_UINT_8 32L -#undef com_portaudio_PortAudio_HOST_API_TYPE_DEV -#define com_portaudio_PortAudio_HOST_API_TYPE_DEV 0L -#undef com_portaudio_PortAudio_HOST_API_TYPE_DIRECTSOUND -#define com_portaudio_PortAudio_HOST_API_TYPE_DIRECTSOUND 1L -#undef com_portaudio_PortAudio_HOST_API_TYPE_MME -#define com_portaudio_PortAudio_HOST_API_TYPE_MME 2L -#undef com_portaudio_PortAudio_HOST_API_TYPE_ASIO -#define com_portaudio_PortAudio_HOST_API_TYPE_ASIO 3L -#undef com_portaudio_PortAudio_HOST_API_TYPE_SOUNDMANAGER -#define com_portaudio_PortAudio_HOST_API_TYPE_SOUNDMANAGER 4L -#undef com_portaudio_PortAudio_HOST_API_TYPE_COREAUDIO -#define com_portaudio_PortAudio_HOST_API_TYPE_COREAUDIO 5L -#undef com_portaudio_PortAudio_HOST_API_TYPE_OSS -#define com_portaudio_PortAudio_HOST_API_TYPE_OSS 7L -#undef com_portaudio_PortAudio_HOST_API_TYPE_ALSA -#define com_portaudio_PortAudio_HOST_API_TYPE_ALSA 8L -#undef com_portaudio_PortAudio_HOST_API_TYPE_AL -#define com_portaudio_PortAudio_HOST_API_TYPE_AL 9L -#undef com_portaudio_PortAudio_HOST_API_TYPE_BEOS -#define com_portaudio_PortAudio_HOST_API_TYPE_BEOS 10L -#undef com_portaudio_PortAudio_HOST_API_TYPE_WDMKS -#define com_portaudio_PortAudio_HOST_API_TYPE_WDMKS 11L -#undef com_portaudio_PortAudio_HOST_API_TYPE_JACK -#define com_portaudio_PortAudio_HOST_API_TYPE_JACK 12L -#undef com_portaudio_PortAudio_HOST_API_TYPE_WASAPI -#define com_portaudio_PortAudio_HOST_API_TYPE_WASAPI 13L -#undef com_portaudio_PortAudio_HOST_API_TYPE_AUDIOSCIENCE -#define com_portaudio_PortAudio_HOST_API_TYPE_AUDIOSCIENCE 14L -#undef com_portaudio_PortAudio_HOST_API_TYPE_COUNT -#define com_portaudio_PortAudio_HOST_API_TYPE_COUNT 15L -/* - * Class: com_portaudio_PortAudio - * Method: getVersion - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getVersion - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getVersionText - * Signature: ()Ljava/lang/String; - */ -JNIEXPORT jstring JNICALL Java_com_portaudio_PortAudio_getVersionText - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: initialize - * Signature: ()V - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_initialize - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: terminate - * Signature: ()V - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_terminate - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getDeviceCount - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDeviceCount - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getDeviceInfo - * Signature: (ILcom/portaudio/DeviceInfo;)V - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_getDeviceInfo - (JNIEnv *, jclass, jint, jobject); - -/* - * Class: com_portaudio_PortAudio - * Method: getHostApiCount - * Signature: ()I - */ -JNIEXPORT jint 
JNICALL Java_com_portaudio_PortAudio_getHostApiCount - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getHostApiInfo - * Signature: (ILcom/portaudio/HostApiInfo;)V - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_getHostApiInfo - (JNIEnv *, jclass, jint, jobject); - -/* - * Class: com_portaudio_PortAudio - * Method: hostApiTypeIdToHostApiIndex - * Signature: (I)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_hostApiTypeIdToHostApiIndex - (JNIEnv *, jclass, jint); - -/* - * Class: com_portaudio_PortAudio - * Method: hostApiDeviceIndexToDeviceIndex - * Signature: (II)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_hostApiDeviceIndexToDeviceIndex - (JNIEnv *, jclass, jint, jint); - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultInputDevice - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultInputDevice - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultOutputDevice - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultOutputDevice - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultHostApi - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultHostApi - (JNIEnv *, jclass); - -/* - * Class: com_portaudio_PortAudio - * Method: isFormatSupported - * Signature: (Lcom/portaudio/StreamParameters;Lcom/portaudio/StreamParameters;I)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_isFormatSupported - (JNIEnv *, jclass, jobject, jobject, jint); - -/* - * Class: com_portaudio_PortAudio - * Method: openStream - * Signature: (Lcom/portaudio/BlockingStream;Lcom/portaudio/StreamParameters;Lcom/portaudio/StreamParameters;III)V - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_openStream - (JNIEnv *, jclass, jobject, jobject, jobject, jint, jint, jint); - -#ifdef __cplusplus -} -#endif -#endif diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/dsound/pa_win_ds_dynlink.c b/spaces/prerna9811/Chord/portaudio/src/hostapi/dsound/pa_win_ds_dynlink.c deleted file mode 100644 index e54df99720a726fe7c99039b2e4947de80625bd7..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/dsound/pa_win_ds_dynlink.c +++ /dev/null @@ -1,224 +0,0 @@ -/* - * Interface for dynamically loading directsound and providing a dummy - * implementation if it isn't present. - * - * Author: Ross Bencina (some portions Phil Burk & Robert Marsanyi) - * - * For PortAudio Portable Real-Time Audio Library - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2006 Phil Burk, Robert Marsanyi and Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** - @file - @ingroup hostapi_src -*/ - -#include "pa_win_ds_dynlink.h" -#include "pa_debugprint.h" - -PaWinDsDSoundEntryPoints paWinDsDSoundEntryPoints = { 0, 0, 0, 0, 0, 0, 0 }; - - -static HRESULT WINAPI DummyDllGetClassObject(REFCLSID rclsid, REFIID riid, LPVOID *ppv) -{ - (void)rclsid; /* unused parameter */ - (void)riid; /* unused parameter */ - (void)ppv; /* unused parameter */ - return CLASS_E_CLASSNOTAVAILABLE; -} - -static HRESULT WINAPI DummyDirectSoundCreate(LPGUID lpcGuidDevice, LPDIRECTSOUND *ppDS, LPUNKNOWN pUnkOuter) -{ - (void)lpcGuidDevice; /* unused parameter */ - (void)ppDS; /* unused parameter */ - (void)pUnkOuter; /* unused parameter */ - return E_NOTIMPL; -} - -static HRESULT WINAPI DummyDirectSoundEnumerateW(LPDSENUMCALLBACKW lpDSEnumCallback, LPVOID lpContext) -{ - (void)lpDSEnumCallback; /* unused parameter */ - (void)lpContext; /* unused parameter */ - return E_NOTIMPL; -} - -static HRESULT WINAPI DummyDirectSoundEnumerateA(LPDSENUMCALLBACKA lpDSEnumCallback, LPVOID lpContext) -{ - (void)lpDSEnumCallback; /* unused parameter */ - (void)lpContext; /* unused parameter */ - return E_NOTIMPL; -} - -static HRESULT WINAPI DummyDirectSoundCaptureCreate(LPGUID lpcGUID, LPDIRECTSOUNDCAPTURE *lplpDSC, LPUNKNOWN pUnkOuter) -{ - (void)lpcGUID; /* unused parameter */ - (void)lplpDSC; /* unused parameter */ - (void)pUnkOuter; /* unused parameter */ - return E_NOTIMPL; -} - -static HRESULT WINAPI DummyDirectSoundCaptureEnumerateW(LPDSENUMCALLBACKW lpDSCEnumCallback, LPVOID lpContext) -{ - (void)lpDSCEnumCallback; /* unused parameter */ - (void)lpContext; /* unused parameter */ - return E_NOTIMPL; -} - -static HRESULT WINAPI DummyDirectSoundCaptureEnumerateA(LPDSENUMCALLBACKA lpDSCEnumCallback, LPVOID lpContext) -{ - (void)lpDSCEnumCallback; /* unused parameter */ - (void)lpContext; /* unused parameter */ - return E_NOTIMPL; -} - -#ifdef PAWIN_USE_DIRECTSOUNDFULLDUPLEXCREATE -static HRESULT WINAPI DummyDirectSoundFullDuplexCreate8( - LPCGUID pcGuidCaptureDevice, - LPCGUID pcGuidRenderDevice, - LPCDSCBUFFERDESC pcDSCBufferDesc, - LPCDSBUFFERDESC pcDSBufferDesc, - HWND hWnd, - DWORD dwLevel, - LPDIRECTSOUNDFULLDUPLEX * ppDSFD, - LPDIRECTSOUNDCAPTUREBUFFER8 * ppDSCBuffer8, - LPDIRECTSOUNDBUFFER8 * ppDSBuffer8, - LPUNKNOWN pUnkOuter) -{ - (void)pcGuidCaptureDevice; /* unused parameter */ - (void)pcGuidRenderDevice; /* unused parameter */ - (void)pcDSCBufferDesc; /* unused parameter */ - (void)pcDSBufferDesc; /* unused parameter */ - (void)hWnd; /* unused parameter */ - (void)dwLevel; /* unused parameter */ - (void)ppDSFD; /* unused parameter */ - (void)ppDSCBuffer8; /* unused parameter */ - (void)ppDSBuffer8; /* unused parameter */ - (void)pUnkOuter; /* unused parameter */ - - return E_NOTIMPL; -} -#endif /* PAWIN_USE_DIRECTSOUNDFULLDUPLEXCREATE */ 
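/*
 * Illustrative sketch (not part of PortAudio): the generic "load or fall back
 * to a dummy" idiom used by this file. MyFunc_t / DummyMyFunc are hypothetical
 * names; LoadLibraryA, GetProcAddress and FreeLibrary are the real Win32 calls.
 *
 *     typedef HRESULT (WINAPI *MyFunc_t)(void);
 *     static HRESULT WINAPI DummyMyFunc(void) { return E_NOTIMPL; }
 *
 *     HMODULE h = LoadLibraryA("somelib.dll");
 *     MyFunc_t f = h ? (MyFunc_t)GetProcAddress(h, "MyFunc") : NULL;
 *     if (f == NULL)
 *         f = DummyMyFunc;   // callers can always invoke f() safely
 *
 * PaWinDs_InitializeDSoundEntryPoints() below applies this pattern to every
 * DirectSound entry point, and PaWinDs_TerminateDSoundEntryPoints() releases
 * the library again with FreeLibrary().
 */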
- -void PaWinDs_InitializeDSoundEntryPoints(void) -{ - paWinDsDSoundEntryPoints.hInstance_ = LoadLibraryA("dsound.dll"); - if( paWinDsDSoundEntryPoints.hInstance_ != NULL ) - { - paWinDsDSoundEntryPoints.DllGetClassObject = - (HRESULT (WINAPI *)(REFCLSID, REFIID , LPVOID *)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DllGetClassObject" ); - if( paWinDsDSoundEntryPoints.DllGetClassObject == NULL ) - paWinDsDSoundEntryPoints.DllGetClassObject = DummyDllGetClassObject; - - paWinDsDSoundEntryPoints.DirectSoundCreate = - (HRESULT (WINAPI *)(LPGUID, LPDIRECTSOUND *, LPUNKNOWN)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundCreate" ); - if( paWinDsDSoundEntryPoints.DirectSoundCreate == NULL ) - paWinDsDSoundEntryPoints.DirectSoundCreate = DummyDirectSoundCreate; - - paWinDsDSoundEntryPoints.DirectSoundEnumerateW = - (HRESULT (WINAPI *)(LPDSENUMCALLBACKW, LPVOID)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundEnumerateW" ); - if( paWinDsDSoundEntryPoints.DirectSoundEnumerateW == NULL ) - paWinDsDSoundEntryPoints.DirectSoundEnumerateW = DummyDirectSoundEnumerateW; - - paWinDsDSoundEntryPoints.DirectSoundEnumerateA = - (HRESULT (WINAPI *)(LPDSENUMCALLBACKA, LPVOID)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundEnumerateA" ); - if( paWinDsDSoundEntryPoints.DirectSoundEnumerateA == NULL ) - paWinDsDSoundEntryPoints.DirectSoundEnumerateA = DummyDirectSoundEnumerateA; - - paWinDsDSoundEntryPoints.DirectSoundCaptureCreate = - (HRESULT (WINAPI *)(LPGUID, LPDIRECTSOUNDCAPTURE *, LPUNKNOWN)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundCaptureCreate" ); - if( paWinDsDSoundEntryPoints.DirectSoundCaptureCreate == NULL ) - paWinDsDSoundEntryPoints.DirectSoundCaptureCreate = DummyDirectSoundCaptureCreate; - - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateW = - (HRESULT (WINAPI *)(LPDSENUMCALLBACKW, LPVOID)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundCaptureEnumerateW" ); - if( paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateW == NULL ) - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateW = DummyDirectSoundCaptureEnumerateW; - - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateA = - (HRESULT (WINAPI *)(LPDSENUMCALLBACKA, LPVOID)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundCaptureEnumerateA" ); - if( paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateA == NULL ) - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateA = DummyDirectSoundCaptureEnumerateA; - -#ifdef PAWIN_USE_DIRECTSOUNDFULLDUPLEXCREATE - paWinDsDSoundEntryPoints.DirectSoundFullDuplexCreate8 = - (HRESULT (WINAPI *)(LPCGUID, LPCGUID, LPCDSCBUFFERDESC, LPCDSBUFFERDESC, - HWND, DWORD, LPDIRECTSOUNDFULLDUPLEX *, LPDIRECTSOUNDCAPTUREBUFFER8 *, - LPDIRECTSOUNDBUFFER8 *, LPUNKNOWN)) - GetProcAddress( paWinDsDSoundEntryPoints.hInstance_, "DirectSoundFullDuplexCreate" ); - if( paWinDsDSoundEntryPoints.DirectSoundFullDuplexCreate8 == NULL ) - paWinDsDSoundEntryPoints.DirectSoundFullDuplexCreate8 = DummyDirectSoundFullDuplexCreate8; -#endif - } - else - { - DWORD errorCode = GetLastError(); // 126 (0x7E) == ERROR_MOD_NOT_FOUND - PA_DEBUG(("Couldn't load dsound.dll error code: %d \n",errorCode)); - - /* initialize with dummy entry points to make live easy when ds isn't present */ - paWinDsDSoundEntryPoints.DirectSoundCreate = DummyDirectSoundCreate; - paWinDsDSoundEntryPoints.DirectSoundEnumerateW = DummyDirectSoundEnumerateW; - paWinDsDSoundEntryPoints.DirectSoundEnumerateA = 
DummyDirectSoundEnumerateA; - paWinDsDSoundEntryPoints.DirectSoundCaptureCreate = DummyDirectSoundCaptureCreate; - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateW = DummyDirectSoundCaptureEnumerateW; - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateA = DummyDirectSoundCaptureEnumerateA; -#ifdef PAWIN_USE_DIRECTSOUNDFULLDUPLEXCREATE - paWinDsDSoundEntryPoints.DirectSoundFullDuplexCreate8 = DummyDirectSoundFullDuplexCreate8; -#endif - } -} - - -void PaWinDs_TerminateDSoundEntryPoints(void) -{ - if( paWinDsDSoundEntryPoints.hInstance_ != NULL ) - { - /* ensure that we crash reliably if the entry points aren't initialised */ - paWinDsDSoundEntryPoints.DirectSoundCreate = 0; - paWinDsDSoundEntryPoints.DirectSoundEnumerateW = 0; - paWinDsDSoundEntryPoints.DirectSoundEnumerateA = 0; - paWinDsDSoundEntryPoints.DirectSoundCaptureCreate = 0; - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateW = 0; - paWinDsDSoundEntryPoints.DirectSoundCaptureEnumerateA = 0; - - FreeLibrary( paWinDsDSoundEntryPoints.hInstance_ ); - paWinDsDSoundEntryPoints.hInstance_ = NULL; - } -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py deleted file mode 100644 index c1c71da08c9e1e554fa6228e5d8317508bffc021..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# GRIB stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific GRIB image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"GRIB" and prefix[7] == 1 - - -class GribStubImageFile(ImageFile.StubImageFile): - format = "GRIB" - format_description = "GRIB" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not a GRIB file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self._mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "GRIB save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GribStubImageFile.format, GribStubImageFile, _accept) -Image.register_save(GribStubImageFile.format, _save) - -Image.register_extension(GribStubImageFile.format, ".grib") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py deleted file mode 100644 index 6dee9db302f51525b69d3d28fcd704be8cce2212..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py +++ /dev/null @@ -1,2017 +0,0 @@ -""" -glifLib.py -- Generic module for reading and writing the .glif format. 
- -More info about the .glif format (GLyphInterchangeFormat) can be found here: - - http://unifiedfontobject.org - -The main class in this module is GlyphSet. It manages a set of .glif files -in a folder. It offers two ways to read glyph data, and one way to write -glyph data. See the class doc string for details. -""" - -from __future__ import annotations - -import logging -import enum -from warnings import warn -from collections import OrderedDict -import fs -import fs.base -import fs.errors -import fs.osfs -import fs.path -from fontTools.misc.textTools import tobytes -from fontTools.misc import plistlib -from fontTools.pens.pointPen import AbstractPointPen, PointToSegmentPen -from fontTools.ufoLib.errors import GlifLibError -from fontTools.ufoLib.filenames import userNameToFileName -from fontTools.ufoLib.validators import ( - genericTypeValidator, - colorValidator, - guidelinesValidator, - anchorsValidator, - identifierValidator, - imageValidator, - glyphLibValidator, -) -from fontTools.misc import etree -from fontTools.ufoLib import _UFOBaseIO, UFOFormatVersion -from fontTools.ufoLib.utils import numberTypes, _VersionTupleEnumMixin - - -__all__ = [ - "GlyphSet", - "GlifLibError", - "readGlyphFromString", - "writeGlyphToString", - "glyphNameToFileName", -] - -logger = logging.getLogger(__name__) - - -# --------- -# Constants -# --------- - -CONTENTS_FILENAME = "contents.plist" -LAYERINFO_FILENAME = "layerinfo.plist" - - -class GLIFFormatVersion(tuple, _VersionTupleEnumMixin, enum.Enum): - FORMAT_1_0 = (1, 0) - FORMAT_2_0 = (2, 0) - - @classmethod - def default(cls, ufoFormatVersion=None): - if ufoFormatVersion is not None: - return max(cls.supported_versions(ufoFormatVersion)) - return super().default() - - @classmethod - def supported_versions(cls, ufoFormatVersion=None): - if ufoFormatVersion is None: - # if ufo format unspecified, return all the supported GLIF formats - return super().supported_versions() - # else only return the GLIF formats supported by the given UFO format - versions = {cls.FORMAT_1_0} - if ufoFormatVersion >= UFOFormatVersion.FORMAT_3_0: - versions.add(cls.FORMAT_2_0) - return frozenset(versions) - - -# workaround for py3.11, see https://github.com/fonttools/fonttools/pull/2655 -GLIFFormatVersion.__str__ = _VersionTupleEnumMixin.__str__ - - -# ------------ -# Simple Glyph -# ------------ - - -class Glyph: - - """ - Minimal glyph object. It has no glyph attributes until either - the draw() or the drawPoints() method has been called. - """ - - def __init__(self, glyphName, glyphSet): - self.glyphName = glyphName - self.glyphSet = glyphSet - - def draw(self, pen, outputImpliedClosingLine=False): - """ - Draw this glyph onto a *FontTools* Pen. - """ - pointPen = PointToSegmentPen( - pen, outputImpliedClosingLine=outputImpliedClosingLine - ) - self.drawPoints(pointPen) - - def drawPoints(self, pointPen): - """ - Draw this glyph onto a PointPen. - """ - self.glyphSet.readGlyph(self.glyphName, self, pointPen) - - -# --------- -# Glyph Set -# --------- - - -class GlyphSet(_UFOBaseIO): - - """ - GlyphSet manages a set of .glif files inside one directory. - - GlyphSet's constructor takes a path to an existing directory as it's - first argument. Reading glyph data can either be done through the - readGlyph() method, or by using GlyphSet's dictionary interface, where - the keys are glyph names and the values are (very) simple glyph objects. - - To write a glyph to the glyph set, you use the writeGlyph() method. 
- The simple glyph objects returned through the dict interface do not - support writing, they are just a convenient way to get at the glyph data. - """ - - glyphClass = Glyph - - def __init__( - self, - path, - glyphNameToFileNameFunc=None, - ufoFormatVersion=None, - validateRead=True, - validateWrite=True, - expectContentsFile=False, - ): - """ - 'path' should be a path (string) to an existing local directory, or - an instance of fs.base.FS class. - - The optional 'glyphNameToFileNameFunc' argument must be a callback - function that takes two arguments: a glyph name and a list of all - existing filenames (if any exist). It should return a file name - (including the .glif extension). The glyphNameToFileName function - is called whenever a file name is created for a given glyph name. - - ``validateRead`` will validate read operations. Its default is ``True``. - ``validateWrite`` will validate write operations. Its default is ``True``. - ``expectContentsFile`` will raise a GlifLibError if a contents.plist file is - not found on the glyph set file system. This should be set to ``True`` if you - are reading an existing UFO and ``False`` if you create a fresh glyph set. - """ - try: - ufoFormatVersion = UFOFormatVersion(ufoFormatVersion) - except ValueError as e: - from fontTools.ufoLib.errors import UnsupportedUFOFormat - - raise UnsupportedUFOFormat( - f"Unsupported UFO format: {ufoFormatVersion!r}" - ) from e - - if hasattr(path, "__fspath__"): # support os.PathLike objects - path = path.__fspath__() - - if isinstance(path, str): - try: - filesystem = fs.osfs.OSFS(path) - except fs.errors.CreateFailed: - raise GlifLibError("No glyphs directory '%s'" % path) - self._shouldClose = True - elif isinstance(path, fs.base.FS): - filesystem = path - try: - filesystem.check() - except fs.errors.FilesystemClosed: - raise GlifLibError("the filesystem '%s' is closed" % filesystem) - self._shouldClose = False - else: - raise TypeError( - "Expected a path string or fs object, found %s" % type(path).__name__ - ) - try: - path = filesystem.getsyspath("/") - except fs.errors.NoSysPath: - # network or in-memory FS may not map to the local one - path = str(filesystem) - # 'dirName' is kept for backward compatibility only, but it's DEPRECATED - # as it's not guaranteed that it maps to an existing OSFS directory. - # Client could use the FS api via the `self.fs` attribute instead. - self.dirName = fs.path.parts(path)[-1] - self.fs = filesystem - # if glyphSet contains no 'contents.plist', we consider it empty - self._havePreviousFile = filesystem.exists(CONTENTS_FILENAME) - if expectContentsFile and not self._havePreviousFile: - raise GlifLibError(f"{CONTENTS_FILENAME} is missing.") - # attribute kept for backward compatibility - self.ufoFormatVersion = ufoFormatVersion.major - self.ufoFormatVersionTuple = ufoFormatVersion - if glyphNameToFileNameFunc is None: - glyphNameToFileNameFunc = glyphNameToFileName - self.glyphNameToFileName = glyphNameToFileNameFunc - self._validateRead = validateRead - self._validateWrite = validateWrite - self._existingFileNames: set[str] | None = None - self._reverseContents = None - - self.rebuildContents() - - def rebuildContents(self, validateRead=None): - """ - Rebuild the contents dict by loading contents.plist. - - ``validateRead`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. 
- """ - if validateRead is None: - validateRead = self._validateRead - contents = self._getPlist(CONTENTS_FILENAME, {}) - # validate the contents - if validateRead: - invalidFormat = False - if not isinstance(contents, dict): - invalidFormat = True - else: - for name, fileName in contents.items(): - if not isinstance(name, str): - invalidFormat = True - if not isinstance(fileName, str): - invalidFormat = True - elif not self.fs.exists(fileName): - raise GlifLibError( - "%s references a file that does not exist: %s" - % (CONTENTS_FILENAME, fileName) - ) - if invalidFormat: - raise GlifLibError("%s is not properly formatted" % CONTENTS_FILENAME) - self.contents = contents - self._existingFileNames = None - self._reverseContents = None - - def getReverseContents(self): - """ - Return a reversed dict of self.contents, mapping file names to - glyph names. This is primarily an aid for custom glyph name to file - name schemes that want to make sure they don't generate duplicate - file names. The file names are converted to lowercase so we can - reliably check for duplicates that only differ in case, which is - important for case-insensitive file systems. - """ - if self._reverseContents is None: - d = {} - for k, v in self.contents.items(): - d[v.lower()] = k - self._reverseContents = d - return self._reverseContents - - def writeContents(self): - """ - Write the contents.plist file out to disk. Call this method when - you're done writing glyphs. - """ - self._writePlist(CONTENTS_FILENAME, self.contents) - - # layer info - - def readLayerInfo(self, info, validateRead=None): - """ - ``validateRead`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. - """ - if validateRead is None: - validateRead = self._validateRead - infoDict = self._getPlist(LAYERINFO_FILENAME, {}) - if validateRead: - if not isinstance(infoDict, dict): - raise GlifLibError("layerinfo.plist is not properly formatted.") - infoDict = validateLayerInfoVersion3Data(infoDict) - # populate the object - for attr, value in infoDict.items(): - try: - setattr(info, attr, value) - except AttributeError: - raise GlifLibError( - "The supplied layer info object does not support setting a necessary attribute (%s)." - % attr - ) - - def writeLayerInfo(self, info, validateWrite=None): - """ - ``validateWrite`` will validate the data, by default it is set to the - class's ``validateWrite`` value, can be overridden. - """ - if validateWrite is None: - validateWrite = self._validateWrite - if self.ufoFormatVersionTuple.major < 3: - raise GlifLibError( - "layerinfo.plist is not allowed in UFO %d." - % self.ufoFormatVersionTuple.major - ) - # gather data - infoData = {} - for attr in layerInfoVersion3ValueData.keys(): - if hasattr(info, attr): - try: - value = getattr(info, attr) - except AttributeError: - raise GlifLibError( - "The supplied info object does not support getting a necessary attribute (%s)." - % attr - ) - if value is None or (attr == "lib" and not value): - continue - infoData[attr] = value - if infoData: - # validate - if validateWrite: - infoData = validateLayerInfoVersion3Data(infoData) - # write file - self._writePlist(LAYERINFO_FILENAME, infoData) - elif self._havePreviousFile and self.fs.exists(LAYERINFO_FILENAME): - # data empty, remove existing file - self.fs.remove(LAYERINFO_FILENAME) - - def getGLIF(self, glyphName): - """ - Get the raw GLIF text for a given glyph name. This only works - for GLIF files that are already on disk. 
- - This method is useful in situations when the raw XML needs to be - read from a glyph set for a particular glyph before fully parsing - it into an object structure via the readGlyph method. - - Raises KeyError if 'glyphName' is not in contents.plist, or - GlifLibError if the file associated with can't be found. - """ - fileName = self.contents[glyphName] - try: - return self.fs.readbytes(fileName) - except fs.errors.ResourceNotFound: - raise GlifLibError( - "The file '%s' associated with glyph '%s' in contents.plist " - "does not exist on %s" % (fileName, glyphName, self.fs) - ) - - def getGLIFModificationTime(self, glyphName): - """ - Returns the modification time for the GLIF file with 'glyphName', as - a floating point number giving the number of seconds since the epoch. - Return None if the associated file does not exist or the underlying - filesystem does not support getting modified times. - Raises KeyError if the glyphName is not in contents.plist. - """ - fileName = self.contents[glyphName] - return self.getFileModificationTime(fileName) - - # reading/writing API - - def readGlyph(self, glyphName, glyphObject=None, pointPen=None, validate=None): - """ - Read a .glif file for 'glyphName' from the glyph set. The - 'glyphObject' argument can be any kind of object (even None); - the readGlyph() method will attempt to set the following - attributes on it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional, in two ways: - - 1) An attribute *won't* be set if the .glif file doesn't - contain data for it. 'glyphObject' will have to deal - with default values itself. - 2) If setting the attribute fails with an AttributeError - (for example if the 'glyphObject' attribute is read- - only), readGlyph() will not propagate that exception, - but ignore that attribute. - - To retrieve outline information, you need to pass an object - conforming to the PointPen protocol as the 'pointPen' argument. - This argument may be None if you don't need the outline data. - - readGlyph() will raise KeyError if the glyph is not present in - the glyph set. - - ``validate`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. - """ - if validate is None: - validate = self._validateRead - text = self.getGLIF(glyphName) - try: - tree = _glifTreeFromString(text) - formatVersions = GLIFFormatVersion.supported_versions( - self.ufoFormatVersionTuple - ) - _readGlyphFromTree( - tree, - glyphObject, - pointPen, - formatVersions=formatVersions, - validate=validate, - ) - except GlifLibError as glifLibError: - # Re-raise with a note that gives extra context, describing where - # the error occurred. - fileName = self.contents[glyphName] - try: - glifLocation = f"'{self.fs.getsyspath(fileName)}'" - except fs.errors.NoSysPath: - # Network or in-memory FS may not map to a local path, so use - # the best string representation we have. - glifLocation = f"'{fileName}' from '{str(self.fs)}'" - - glifLibError._add_note( - f"The issue is in glyph '{glyphName}', located in {glifLocation}." 
- ) - raise - - def writeGlyph( - self, - glyphName, - glyphObject=None, - drawPointsFunc=None, - formatVersion=None, - validate=None, - ): - """ - Write a .glif file for 'glyphName' to the glyph set. The - 'glyphObject' argument can be any kind of object (even None); - the writeGlyph() method will attempt to get the following - attributes from it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional: if 'glyphObject' doesn't - have the attribute, it will simply be skipped. - - To write outline data to the .glif file, writeGlyph() needs - a function (any callable object actually) that will take one - argument: an object that conforms to the PointPen protocol. - The function will be called by writeGlyph(); it has to call the - proper PointPen methods to transfer the outline to the .glif file. - - The GLIF format version will be chosen based on the ufoFormatVersion - passed during the creation of this object. If a particular format - version is desired, it can be passed with the formatVersion argument. - The formatVersion argument accepts either a tuple of integers for - (major, minor), or a single integer for the major digit only (with - minor digit implied as 0). - - An UnsupportedGLIFFormat exception is raised if the requested GLIF - formatVersion is not supported. - - ``validate`` will validate the data, by default it is set to the - class's ``validateWrite`` value, can be overridden. - """ - if formatVersion is None: - formatVersion = GLIFFormatVersion.default(self.ufoFormatVersionTuple) - else: - try: - formatVersion = GLIFFormatVersion(formatVersion) - except ValueError as e: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - f"Unsupported GLIF format version: {formatVersion!r}" - ) from e - if formatVersion not in GLIFFormatVersion.supported_versions( - self.ufoFormatVersionTuple - ): - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - f"Unsupported GLIF format version ({formatVersion!s}) " - f"for UFO format version {self.ufoFormatVersionTuple!s}." - ) - if validate is None: - validate = self._validateWrite - fileName = self.contents.get(glyphName) - if fileName is None: - if self._existingFileNames is None: - self._existingFileNames = { - fileName.lower() for fileName in self.contents.values() - } - fileName = self.glyphNameToFileName(glyphName, self._existingFileNames) - self.contents[glyphName] = fileName - self._existingFileNames.add(fileName.lower()) - if self._reverseContents is not None: - self._reverseContents[fileName.lower()] = glyphName - data = _writeGlyphToBytes( - glyphName, - glyphObject, - drawPointsFunc, - formatVersion=formatVersion, - validate=validate, - ) - if ( - self._havePreviousFile - and self.fs.exists(fileName) - and data == self.fs.readbytes(fileName) - ): - return - self.fs.writebytes(fileName, data) - - def deleteGlyph(self, glyphName): - """Permanently delete the glyph from the glyph set on disk. Will - raise KeyError if the glyph is not present in the glyph set. 
- """ - fileName = self.contents[glyphName] - self.fs.remove(fileName) - if self._existingFileNames is not None: - self._existingFileNames.remove(fileName.lower()) - if self._reverseContents is not None: - del self._reverseContents[fileName.lower()] - del self.contents[glyphName] - - # dict-like support - - def keys(self): - return list(self.contents.keys()) - - def has_key(self, glyphName): - return glyphName in self.contents - - __contains__ = has_key - - def __len__(self): - return len(self.contents) - - def __getitem__(self, glyphName): - if glyphName not in self.contents: - raise KeyError(glyphName) - return self.glyphClass(glyphName, self) - - # quickly fetch unicode values - - def getUnicodes(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to lists containing - the unicode value[s] for that glyph, if any. This parses the .glif - files partially, so it is a lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - unicodes = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - unicodes[glyphName] = _fetchUnicodes(text) - return unicodes - - def getComponentReferences(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to lists containing the - base glyph name of components in the glyph. This parses the .glif - files partially, so it is a lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - components = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - components[glyphName] = _fetchComponentBases(text) - return components - - def getImageReferences(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to the file name of the image - referenced by the glyph. This parses the .glif files partially, so it is a - lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - images = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - images[glyphName] = _fetchImageFileName(text) - return images - - def close(self): - if self._shouldClose: - self.fs.close() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - self.close() - - -# ----------------------- -# Glyph Name to File Name -# ----------------------- - - -def glyphNameToFileName(glyphName, existingFileNames): - """ - Wrapper around the userNameToFileName function in filenames.py - - Note that existingFileNames should be a set for large glyphsets - or performance will suffer. - """ - if existingFileNames is None: - existingFileNames = set() - return userNameToFileName(glyphName, existing=existingFileNames, suffix=".glif") - - -# ----------------------- -# GLIF To and From String -# ----------------------- - - -def readGlyphFromString( - aString, - glyphObject=None, - pointPen=None, - formatVersions=None, - validate=True, -): - """ - Read .glif data from a string into a glyph object. 
- - The 'glyphObject' argument can be any kind of object (even None); - the readGlyphFromString() method will attempt to set the following - attributes on it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional, in two ways: - - 1) An attribute *won't* be set if the .glif file doesn't - contain data for it. 'glyphObject' will have to deal - with default values itself. - 2) If setting the attribute fails with an AttributeError - (for example if the 'glyphObject' attribute is read- - only), readGlyphFromString() will not propagate that - exception, but ignore that attribute. - - To retrieve outline information, you need to pass an object - conforming to the PointPen protocol as the 'pointPen' argument. - This argument may be None if you don't need the outline data. - - The formatVersions optional argument define the GLIF format versions - that are allowed to be read. - The type is Optional[Iterable[Tuple[int, int], int]]. It can contain - either integers (for the major versions to be allowed, with minor - digits defaulting to 0), or tuples of integers to specify both - (major, minor) versions. - By default when formatVersions is None all the GLIF format versions - currently defined are allowed to be read. - - ``validate`` will validate the read data. It is set to ``True`` by default. - """ - tree = _glifTreeFromString(aString) - - if formatVersions is None: - validFormatVersions = GLIFFormatVersion.supported_versions() - else: - validFormatVersions, invalidFormatVersions = set(), set() - for v in formatVersions: - try: - formatVersion = GLIFFormatVersion(v) - except ValueError: - invalidFormatVersions.add(v) - else: - validFormatVersions.add(formatVersion) - if not validFormatVersions: - raise ValueError( - "None of the requested GLIF formatVersions are supported: " - f"{formatVersions!r}" - ) - - _readGlyphFromTree( - tree, - glyphObject, - pointPen, - formatVersions=validFormatVersions, - validate=validate, - ) - - -def _writeGlyphToBytes( - glyphName, - glyphObject=None, - drawPointsFunc=None, - writer=None, - formatVersion=None, - validate=True, -): - """Return .glif data for a glyph as a UTF-8 encoded bytes string.""" - try: - formatVersion = GLIFFormatVersion(formatVersion) - except ValueError: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - "Unsupported GLIF format version: {formatVersion!r}" - ) - # start - if validate and not isinstance(glyphName, str): - raise GlifLibError("The glyph name is not properly formatted.") - if validate and len(glyphName) == 0: - raise GlifLibError("The glyph name is empty.") - glyphAttrs = OrderedDict( - [("name", glyphName), ("format", repr(formatVersion.major))] - ) - if formatVersion.minor != 0: - glyphAttrs["formatMinor"] = repr(formatVersion.minor) - root = etree.Element("glyph", glyphAttrs) - identifiers = set() - # advance - _writeAdvance(glyphObject, root, validate) - # unicodes - if getattr(glyphObject, "unicodes", None): - _writeUnicodes(glyphObject, root, validate) - # note - if getattr(glyphObject, "note", None): - _writeNote(glyphObject, root, validate) - # image - if formatVersion.major >= 2 and getattr(glyphObject, "image", None): - _writeImage(glyphObject, root, validate) 
- # guidelines - if formatVersion.major >= 2 and getattr(glyphObject, "guidelines", None): - _writeGuidelines(glyphObject, root, identifiers, validate) - # anchors - anchors = getattr(glyphObject, "anchors", None) - if formatVersion.major >= 2 and anchors: - _writeAnchors(glyphObject, root, identifiers, validate) - # outline - if drawPointsFunc is not None: - outline = etree.SubElement(root, "outline") - pen = GLIFPointPen(outline, identifiers=identifiers, validate=validate) - drawPointsFunc(pen) - if formatVersion.major == 1 and anchors: - _writeAnchorsFormat1(pen, anchors, validate) - # prevent lxml from writing self-closing tags - if not len(outline): - outline.text = "\n " - # lib - if getattr(glyphObject, "lib", None): - _writeLib(glyphObject, root, validate) - # return the text - data = etree.tostring( - root, encoding="UTF-8", xml_declaration=True, pretty_print=True - ) - return data - - -def writeGlyphToString( - glyphName, - glyphObject=None, - drawPointsFunc=None, - formatVersion=None, - validate=True, -): - """ - Return .glif data for a glyph as a string. The XML declaration's - encoding is always set to "UTF-8". - The 'glyphObject' argument can be any kind of object (even None); - the writeGlyphToString() method will attempt to get the following - attributes from it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional: if 'glyphObject' doesn't - have the attribute, it will simply be skipped. - - To write outline data to the .glif file, writeGlyphToString() needs - a function (any callable object actually) that will take one - argument: an object that conforms to the PointPen protocol. - The function will be called by writeGlyphToString(); it has to call the - proper PointPen methods to transfer the outline to the .glif file. - - The GLIF format version can be specified with the formatVersion argument. - This accepts either a tuple of integers for (major, minor), or a single - integer for the major digit only (with minor digit implied as 0). - By default when formatVesion is None the latest GLIF format version will - be used; currently it's 2.0, which is equivalent to formatVersion=(2, 0). - - An UnsupportedGLIFFormat exception is raised if the requested UFO - formatVersion is not supported. - - ``validate`` will validate the written data. It is set to ``True`` by default. 
- """ - data = _writeGlyphToBytes( - glyphName, - glyphObject=glyphObject, - drawPointsFunc=drawPointsFunc, - formatVersion=formatVersion, - validate=validate, - ) - return data.decode("utf-8") - - -def _writeAdvance(glyphObject, element, validate): - width = getattr(glyphObject, "width", None) - if width is not None: - if validate and not isinstance(width, numberTypes): - raise GlifLibError("width attribute must be int or float") - if width == 0: - width = None - height = getattr(glyphObject, "height", None) - if height is not None: - if validate and not isinstance(height, numberTypes): - raise GlifLibError("height attribute must be int or float") - if height == 0: - height = None - if width is not None and height is not None: - etree.SubElement( - element, - "advance", - OrderedDict([("height", repr(height)), ("width", repr(width))]), - ) - elif width is not None: - etree.SubElement(element, "advance", dict(width=repr(width))) - elif height is not None: - etree.SubElement(element, "advance", dict(height=repr(height))) - - -def _writeUnicodes(glyphObject, element, validate): - unicodes = getattr(glyphObject, "unicodes", None) - if validate and isinstance(unicodes, int): - unicodes = [unicodes] - seen = set() - for code in unicodes: - if validate and not isinstance(code, int): - raise GlifLibError("unicode values must be int") - if code in seen: - continue - seen.add(code) - hexCode = "%04X" % code - etree.SubElement(element, "unicode", dict(hex=hexCode)) - - -def _writeNote(glyphObject, element, validate): - note = getattr(glyphObject, "note", None) - if validate and not isinstance(note, str): - raise GlifLibError("note attribute must be str") - note = note.strip() - note = "\n" + note + "\n" - etree.SubElement(element, "note").text = note - - -def _writeImage(glyphObject, element, validate): - image = getattr(glyphObject, "image", None) - if validate and not imageValidator(image): - raise GlifLibError( - "image attribute must be a dict or dict-like object with the proper structure." 
- ) - attrs = OrderedDict([("fileName", image["fileName"])]) - for attr, default in _transformationInfo: - value = image.get(attr, default) - if value != default: - attrs[attr] = repr(value) - color = image.get("color") - if color is not None: - attrs["color"] = color - etree.SubElement(element, "image", attrs) - - -def _writeGuidelines(glyphObject, element, identifiers, validate): - guidelines = getattr(glyphObject, "guidelines", []) - if validate and not guidelinesValidator(guidelines): - raise GlifLibError("guidelines attribute does not have the proper structure.") - for guideline in guidelines: - attrs = OrderedDict() - x = guideline.get("x") - if x is not None: - attrs["x"] = repr(x) - y = guideline.get("y") - if y is not None: - attrs["y"] = repr(y) - angle = guideline.get("angle") - if angle is not None: - attrs["angle"] = repr(angle) - name = guideline.get("name") - if name is not None: - attrs["name"] = name - color = guideline.get("color") - if color is not None: - attrs["color"] = color - identifier = guideline.get("identifier") - if identifier is not None: - if validate and identifier in identifiers: - raise GlifLibError("identifier used more than once: %s" % identifier) - attrs["identifier"] = identifier - identifiers.add(identifier) - etree.SubElement(element, "guideline", attrs) - - -def _writeAnchorsFormat1(pen, anchors, validate): - if validate and not anchorsValidator(anchors): - raise GlifLibError("anchors attribute does not have the proper structure.") - for anchor in anchors: - attrs = {} - x = anchor["x"] - attrs["x"] = repr(x) - y = anchor["y"] - attrs["y"] = repr(y) - name = anchor.get("name") - if name is not None: - attrs["name"] = name - pen.beginPath() - pen.addPoint((x, y), segmentType="move", name=name) - pen.endPath() - - -def _writeAnchors(glyphObject, element, identifiers, validate): - anchors = getattr(glyphObject, "anchors", []) - if validate and not anchorsValidator(anchors): - raise GlifLibError("anchors attribute does not have the proper structure.") - for anchor in anchors: - attrs = OrderedDict() - x = anchor["x"] - attrs["x"] = repr(x) - y = anchor["y"] - attrs["y"] = repr(y) - name = anchor.get("name") - if name is not None: - attrs["name"] = name - color = anchor.get("color") - if color is not None: - attrs["color"] = color - identifier = anchor.get("identifier") - if identifier is not None: - if validate and identifier in identifiers: - raise GlifLibError("identifier used more than once: %s" % identifier) - attrs["identifier"] = identifier - identifiers.add(identifier) - etree.SubElement(element, "anchor", attrs) - - -def _writeLib(glyphObject, element, validate): - lib = getattr(glyphObject, "lib", None) - if not lib: - # don't write empty lib - return - if validate: - valid, message = glyphLibValidator(lib) - if not valid: - raise GlifLibError(message) - if not isinstance(lib, dict): - lib = dict(lib) - # plist inside GLIF begins with 2 levels of indentation - e = plistlib.totree(lib, indent_level=2) - etree.SubElement(element, "lib").append(e) - - -# ----------------------- -# layerinfo.plist Support -# ----------------------- - -layerInfoVersion3ValueData = { - "color": dict(type=str, valueValidator=colorValidator), - "lib": dict(type=dict, valueValidator=genericTypeValidator), -} - - -def validateLayerInfoVersion3ValueForAttribute(attr, value): - """ - This performs very basic validation of the value for attribute - following the UFO 3 fontinfo.plist specification. 
The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the value - is of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - if attr not in layerInfoVersion3ValueData: - return False - dataValidationDict = layerInfoVersion3ValueData[attr] - valueType = dataValidationDict.get("type") - validator = dataValidationDict.get("valueValidator") - valueOptions = dataValidationDict.get("valueOptions") - # have specific options for the validator - if valueOptions is not None: - isValidValue = validator(value, valueOptions) - # no specific options - else: - if validator == genericTypeValidator: - isValidValue = validator(value, valueType) - else: - isValidValue = validator(value) - return isValidValue - - -def validateLayerInfoVersion3Data(infoData): - """ - This performs very basic validation of the value for infoData - following the UFO 3 layerinfo.plist specification. The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the values - are of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - for attr, value in infoData.items(): - if attr not in layerInfoVersion3ValueData: - raise GlifLibError("Unknown attribute %s." % attr) - isValidValue = validateLayerInfoVersion3ValueForAttribute(attr, value) - if not isValidValue: - raise GlifLibError(f"Invalid value for attribute {attr} ({value!r}).") - return infoData - - -# ----------------- -# GLIF Tree Support -# ----------------- - - -def _glifTreeFromFile(aFile): - if etree._have_lxml: - tree = etree.parse(aFile, parser=etree.XMLParser(remove_comments=True)) - else: - tree = etree.parse(aFile) - root = tree.getroot() - if root.tag != "glyph": - raise GlifLibError("The GLIF is not properly formatted.") - if root.text and root.text.strip() != "": - raise GlifLibError("Invalid GLIF structure.") - return root - - -def _glifTreeFromString(aString): - data = tobytes(aString, encoding="utf-8") - try: - if etree._have_lxml: - root = etree.fromstring(data, parser=etree.XMLParser(remove_comments=True)) - else: - root = etree.fromstring(data) - except Exception as etree_exception: - raise GlifLibError("GLIF contains invalid XML.") from etree_exception - - if root.tag != "glyph": - raise GlifLibError("The GLIF is not properly formatted.") - if root.text and root.text.strip() != "": - raise GlifLibError("Invalid GLIF structure.") - return root - - -def _readGlyphFromTree( - tree, - glyphObject=None, - pointPen=None, - formatVersions=GLIFFormatVersion.supported_versions(), - validate=True, -): - # check the format version - formatVersionMajor = tree.get("format") - if validate and formatVersionMajor is None: - raise GlifLibError("Unspecified format version in GLIF.") - formatVersionMinor = tree.get("formatMinor", 0) - try: - formatVersion = GLIFFormatVersion( - (int(formatVersionMajor), int(formatVersionMinor)) - ) - except ValueError as e: - msg = "Unsupported GLIF format: %s.%s" % ( - formatVersionMajor, - formatVersionMinor, - ) - if validate: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat(msg) from e - # warn but continue using the latest supported format - formatVersion = GLIFFormatVersion.default() - logger.warning( - "%s. Assuming the latest supported version (%s). 
" - "Some data may be skipped or parsed incorrectly.", - msg, - formatVersion, - ) - - if validate and formatVersion not in formatVersions: - raise GlifLibError(f"Forbidden GLIF format version: {formatVersion!s}") - - try: - readGlyphFromTree = _READ_GLYPH_FROM_TREE_FUNCS[formatVersion] - except KeyError: - raise NotImplementedError(formatVersion) - - readGlyphFromTree( - tree=tree, - glyphObject=glyphObject, - pointPen=pointPen, - validate=validate, - formatMinor=formatVersion.minor, - ) - - -def _readGlyphFromTreeFormat1( - tree, glyphObject=None, pointPen=None, validate=None, **kwargs -): - # get the name - _readName(glyphObject, tree, validate) - # populate the sub elements - unicodes = [] - haveSeenAdvance = haveSeenOutline = haveSeenLib = haveSeenNote = False - for element in tree: - if element.tag == "outline": - if validate: - if haveSeenOutline: - raise GlifLibError("The outline element occurs more than once.") - if element.attrib: - raise GlifLibError( - "The outline element contains unknown attributes." - ) - if element.text and element.text.strip() != "": - raise GlifLibError("Invalid outline structure.") - haveSeenOutline = True - buildOutlineFormat1(glyphObject, pointPen, element, validate) - elif glyphObject is None: - continue - elif element.tag == "advance": - if validate and haveSeenAdvance: - raise GlifLibError("The advance element occurs more than once.") - haveSeenAdvance = True - _readAdvance(glyphObject, element) - elif element.tag == "unicode": - try: - v = element.get("hex") - v = int(v, 16) - if v not in unicodes: - unicodes.append(v) - except ValueError: - raise GlifLibError( - "Illegal value for hex attribute of unicode element." - ) - elif element.tag == "note": - if validate and haveSeenNote: - raise GlifLibError("The note element occurs more than once.") - haveSeenNote = True - _readNote(glyphObject, element) - elif element.tag == "lib": - if validate and haveSeenLib: - raise GlifLibError("The lib element occurs more than once.") - haveSeenLib = True - _readLib(glyphObject, element, validate) - else: - raise GlifLibError("Unknown element in GLIF: %s" % element) - # set the collected unicodes - if unicodes: - _relaxedSetattr(glyphObject, "unicodes", unicodes) - - -def _readGlyphFromTreeFormat2( - tree, glyphObject=None, pointPen=None, validate=None, formatMinor=0 -): - # get the name - _readName(glyphObject, tree, validate) - # populate the sub elements - unicodes = [] - guidelines = [] - anchors = [] - haveSeenAdvance = ( - haveSeenImage - ) = haveSeenOutline = haveSeenLib = haveSeenNote = False - identifiers = set() - for element in tree: - if element.tag == "outline": - if validate: - if haveSeenOutline: - raise GlifLibError("The outline element occurs more than once.") - if element.attrib: - raise GlifLibError( - "The outline element contains unknown attributes." - ) - if element.text and element.text.strip() != "": - raise GlifLibError("Invalid outline structure.") - haveSeenOutline = True - if pointPen is not None: - buildOutlineFormat2( - glyphObject, pointPen, element, identifiers, validate - ) - elif glyphObject is None: - continue - elif element.tag == "advance": - if validate and haveSeenAdvance: - raise GlifLibError("The advance element occurs more than once.") - haveSeenAdvance = True - _readAdvance(glyphObject, element) - elif element.tag == "unicode": - try: - v = element.get("hex") - v = int(v, 16) - if v not in unicodes: - unicodes.append(v) - except ValueError: - raise GlifLibError( - "Illegal value for hex attribute of unicode element." 
- ) - elif element.tag == "guideline": - if validate and len(element): - raise GlifLibError("Unknown children in guideline element.") - attrib = dict(element.attrib) - for attr in ("x", "y", "angle"): - if attr in attrib: - attrib[attr] = _number(attrib[attr]) - guidelines.append(attrib) - elif element.tag == "anchor": - if validate and len(element): - raise GlifLibError("Unknown children in anchor element.") - attrib = dict(element.attrib) - for attr in ("x", "y"): - if attr in element.attrib: - attrib[attr] = _number(attrib[attr]) - anchors.append(attrib) - elif element.tag == "image": - if validate: - if haveSeenImage: - raise GlifLibError("The image element occurs more than once.") - if len(element): - raise GlifLibError("Unknown children in image element.") - haveSeenImage = True - _readImage(glyphObject, element, validate) - elif element.tag == "note": - if validate and haveSeenNote: - raise GlifLibError("The note element occurs more than once.") - haveSeenNote = True - _readNote(glyphObject, element) - elif element.tag == "lib": - if validate and haveSeenLib: - raise GlifLibError("The lib element occurs more than once.") - haveSeenLib = True - _readLib(glyphObject, element, validate) - else: - raise GlifLibError("Unknown element in GLIF: %s" % element) - # set the collected unicodes - if unicodes: - _relaxedSetattr(glyphObject, "unicodes", unicodes) - # set the collected guidelines - if guidelines: - if validate and not guidelinesValidator(guidelines, identifiers): - raise GlifLibError("The guidelines are improperly formatted.") - _relaxedSetattr(glyphObject, "guidelines", guidelines) - # set the collected anchors - if anchors: - if validate and not anchorsValidator(anchors, identifiers): - raise GlifLibError("The anchors are improperly formatted.") - _relaxedSetattr(glyphObject, "anchors", anchors) - - -_READ_GLYPH_FROM_TREE_FUNCS = { - GLIFFormatVersion.FORMAT_1_0: _readGlyphFromTreeFormat1, - GLIFFormatVersion.FORMAT_2_0: _readGlyphFromTreeFormat2, -} - - -def _readName(glyphObject, root, validate): - glyphName = root.get("name") - if validate and not glyphName: - raise GlifLibError("Empty glyph name in GLIF.") - if glyphName and glyphObject is not None: - _relaxedSetattr(glyphObject, "name", glyphName) - - -def _readAdvance(glyphObject, advance): - width = _number(advance.get("width", 0)) - _relaxedSetattr(glyphObject, "width", width) - height = _number(advance.get("height", 0)) - _relaxedSetattr(glyphObject, "height", height) - - -def _readNote(glyphObject, note): - lines = note.text.split("\n") - note = "\n".join(line.strip() for line in lines if line.strip()) - _relaxedSetattr(glyphObject, "note", note) - - -def _readLib(glyphObject, lib, validate): - assert len(lib) == 1 - child = lib[0] - plist = plistlib.fromtree(child) - if validate: - valid, message = glyphLibValidator(plist) - if not valid: - raise GlifLibError(message) - _relaxedSetattr(glyphObject, "lib", plist) - - -def _readImage(glyphObject, image, validate): - imageData = dict(image.attrib) - for attr, default in _transformationInfo: - value = imageData.get(attr, default) - imageData[attr] = _number(value) - if validate and not imageValidator(imageData): - raise GlifLibError("The image element is not properly formatted.") - _relaxedSetattr(glyphObject, "image", imageData) - - -# ---------------- -# GLIF to PointPen -# ---------------- - -contourAttributesFormat2 = {"identifier"} -componentAttributesFormat1 = { - "base", - "xScale", - "xyScale", - "yxScale", - "yScale", - "xOffset", - "yOffset", -} 
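# A minimal, standalone sketch of how the readGlyphFromString() and
# writeGlyphToString() entry points documented above are typically combined.
# It is illustrative only: the _ExampleGlyph container is a hypothetical
# attribute bucket (any object with settable attributes works, as the
# readGlyphFromString() docstring explains), and RecordingPointPen is assumed
# to be importable from fontTools.pens.recordingPen.


def _example_glif_roundtrip():
    from fontTools.pens.recordingPen import RecordingPointPen
    from fontTools.ufoLib.glifLib import readGlyphFromString, writeGlyphToString

    glif = """<?xml version="1.0" encoding="UTF-8"?>
<glyph name="A" format="2">
  <advance width="500"/>
  <unicode hex="0041"/>
  <outline>
    <contour>
      <point x="20" y="0" type="line"/>
      <point x="480" y="0" type="line"/>
      <point x="250" y="700" type="line"/>
    </contour>
  </outline>
</glyph>
"""

    class _ExampleGlyph:
        pass  # readGlyphFromString() will set width, height, unicodes, ... on it

    glyph = _ExampleGlyph()
    pen = RecordingPointPen()  # records beginPath/addPoint/endPath calls
    readGlyphFromString(glif, glyphObject=glyph, pointPen=pen)

    # Serialize back out: pen.replay plays the recorded point calls into the
    # GLIFPointPen that writeGlyphToString() passes as drawPointsFunc's argument.
    return writeGlyphToString("A", glyphObject=glyph, drawPointsFunc=pen.replay)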
-componentAttributesFormat2 = componentAttributesFormat1 | {"identifier"} -pointAttributesFormat1 = {"x", "y", "type", "smooth", "name"} -pointAttributesFormat2 = pointAttributesFormat1 | {"identifier"} -pointSmoothOptions = {"no", "yes"} -pointTypeOptions = {"move", "line", "offcurve", "curve", "qcurve"} - -# format 1 - - -def buildOutlineFormat1(glyphObject, pen, outline, validate): - anchors = [] - for element in outline: - if element.tag == "contour": - if len(element) == 1: - point = element[0] - if point.tag == "point": - anchor = _buildAnchorFormat1(point, validate) - if anchor is not None: - anchors.append(anchor) - continue - if pen is not None: - _buildOutlineContourFormat1(pen, element, validate) - elif element.tag == "component": - if pen is not None: - _buildOutlineComponentFormat1(pen, element, validate) - else: - raise GlifLibError("Unknown element in outline element: %s" % element) - if glyphObject is not None and anchors: - if validate and not anchorsValidator(anchors): - raise GlifLibError("GLIF 1 anchors are not properly formatted.") - _relaxedSetattr(glyphObject, "anchors", anchors) - - -def _buildAnchorFormat1(point, validate): - if point.get("type") != "move": - return None - name = point.get("name") - if name is None: - return None - x = point.get("x") - y = point.get("y") - if validate and x is None: - raise GlifLibError("Required x attribute is missing in point element.") - if validate and y is None: - raise GlifLibError("Required y attribute is missing in point element.") - x = _number(x) - y = _number(y) - anchor = dict(x=x, y=y, name=name) - return anchor - - -def _buildOutlineContourFormat1(pen, contour, validate): - if validate and contour.attrib: - raise GlifLibError("Unknown attributes in contour element.") - pen.beginPath() - if len(contour): - massaged = _validateAndMassagePointStructures( - contour, - pointAttributesFormat1, - openContourOffCurveLeniency=True, - validate=validate, - ) - _buildOutlinePointsFormat1(pen, massaged) - pen.endPath() - - -def _buildOutlinePointsFormat1(pen, contour): - for point in contour: - x = point["x"] - y = point["y"] - segmentType = point["segmentType"] - smooth = point["smooth"] - name = point["name"] - pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name) - - -def _buildOutlineComponentFormat1(pen, component, validate): - if validate: - if len(component): - raise GlifLibError("Unknown child elements of component element.") - for attr in component.attrib.keys(): - if attr not in componentAttributesFormat1: - raise GlifLibError("Unknown attribute in component element: %s" % attr) - baseGlyphName = component.get("base") - if validate and baseGlyphName is None: - raise GlifLibError("The base attribute is not defined in the component.") - transformation = [] - for attr, default in _transformationInfo: - value = component.get(attr) - if value is None: - value = default - else: - value = _number(value) - transformation.append(value) - pen.addComponent(baseGlyphName, tuple(transformation)) - - -# format 2 - - -def buildOutlineFormat2(glyphObject, pen, outline, identifiers, validate): - for element in outline: - if element.tag == "contour": - _buildOutlineContourFormat2(pen, element, identifiers, validate) - elif element.tag == "component": - _buildOutlineComponentFormat2(pen, element, identifiers, validate) - else: - raise GlifLibError("Unknown element in outline element: %s" % element.tag) - - -def _buildOutlineContourFormat2(pen, contour, identifiers, validate): - if validate: - for attr in 
contour.attrib.keys(): - if attr not in contourAttributesFormat2: - raise GlifLibError("Unknown attribute in contour element: %s" % attr) - identifier = contour.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "The contour identifier %s is not valid." % identifier - ) - identifiers.add(identifier) - try: - pen.beginPath(identifier=identifier) - except TypeError: - pen.beginPath() - warn( - "The beginPath method needs an identifier kwarg. The contour's identifier value has been discarded.", - DeprecationWarning, - ) - if len(contour): - massaged = _validateAndMassagePointStructures( - contour, pointAttributesFormat2, validate=validate - ) - _buildOutlinePointsFormat2(pen, massaged, identifiers, validate) - pen.endPath() - - -def _buildOutlinePointsFormat2(pen, contour, identifiers, validate): - for point in contour: - x = point["x"] - y = point["y"] - segmentType = point["segmentType"] - smooth = point["smooth"] - name = point["name"] - identifier = point.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError("The identifier %s is not valid." % identifier) - identifiers.add(identifier) - try: - pen.addPoint( - (x, y), - segmentType=segmentType, - smooth=smooth, - name=name, - identifier=identifier, - ) - except TypeError: - pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name) - warn( - "The addPoint method needs an identifier kwarg. The point's identifier value has been discarded.", - DeprecationWarning, - ) - - -def _buildOutlineComponentFormat2(pen, component, identifiers, validate): - if validate: - if len(component): - raise GlifLibError("Unknown child elements of component element.") - for attr in component.attrib.keys(): - if attr not in componentAttributesFormat2: - raise GlifLibError("Unknown attribute in component element: %s" % attr) - baseGlyphName = component.get("base") - if validate and baseGlyphName is None: - raise GlifLibError("The base attribute is not defined in the component.") - transformation = [] - for attr, default in _transformationInfo: - value = component.get(attr) - if value is None: - value = default - else: - value = _number(value) - transformation.append(value) - identifier = component.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if validate and not identifierValidator(identifier): - raise GlifLibError("The identifier %s is not valid." % identifier) - identifiers.add(identifier) - try: - pen.addComponent(baseGlyphName, tuple(transformation), identifier=identifier) - except TypeError: - pen.addComponent(baseGlyphName, tuple(transformation)) - warn( - "The addComponent method needs an identifier kwarg. 
The component's identifier value has been discarded.", - DeprecationWarning, - ) - - -# all formats - - -def _validateAndMassagePointStructures( - contour, pointAttributes, openContourOffCurveLeniency=False, validate=True -): - if not len(contour): - return - # store some data for later validation - lastOnCurvePoint = None - haveOffCurvePoint = False - # validate and massage the individual point elements - massaged = [] - for index, element in enumerate(contour): - # not - if element.tag != "point": - raise GlifLibError( - "Unknown child element (%s) of contour element." % element.tag - ) - point = dict(element.attrib) - massaged.append(point) - if validate: - # unknown attributes - for attr in point.keys(): - if attr not in pointAttributes: - raise GlifLibError("Unknown attribute in point element: %s" % attr) - # search for unknown children - if len(element): - raise GlifLibError("Unknown child elements in point element.") - # x and y are required - for attr in ("x", "y"): - try: - point[attr] = _number(point[attr]) - except KeyError as e: - raise GlifLibError( - f"Required {attr} attribute is missing in point element." - ) from e - # segment type - pointType = point.pop("type", "offcurve") - if validate and pointType not in pointTypeOptions: - raise GlifLibError("Unknown point type: %s" % pointType) - if pointType == "offcurve": - pointType = None - point["segmentType"] = pointType - if pointType is None: - haveOffCurvePoint = True - else: - lastOnCurvePoint = index - # move can only occur as the first point - if validate and pointType == "move" and index != 0: - raise GlifLibError( - "A move point occurs after the first point in the contour." - ) - # smooth is optional - smooth = point.get("smooth", "no") - if validate and smooth is not None: - if smooth not in pointSmoothOptions: - raise GlifLibError("Unknown point smooth value: %s" % smooth) - smooth = smooth == "yes" - point["smooth"] = smooth - # smooth can only be applied to curve and qcurve - if validate and smooth and pointType is None: - raise GlifLibError("smooth attribute set in an offcurve point.") - # name is optional - if "name" not in element.attrib: - point["name"] = None - if openContourOffCurveLeniency: - # remove offcurves that precede a move. this is technically illegal, - # but we let it slide because there are fonts out there in the wild like this. - if massaged[0]["segmentType"] == "move": - count = 0 - for point in reversed(massaged): - if point["segmentType"] is None: - count += 1 - else: - break - if count: - massaged = massaged[:-count] - # validate the off-curves in the segments - if validate and haveOffCurvePoint and lastOnCurvePoint is not None: - # we only care about how many offCurves there are before an onCurve - # filter out the trailing offCurves - offCurvesCount = len(massaged) - 1 - lastOnCurvePoint - for point in massaged: - segmentType = point["segmentType"] - if segmentType is None: - offCurvesCount += 1 - else: - if offCurvesCount: - # move and line can't be preceded by off-curves - if segmentType == "move": - # this will have been filtered out already - raise GlifLibError("move can not have an offcurve.") - elif segmentType == "line": - raise GlifLibError("line can not have an offcurve.") - elif segmentType == "curve": - if offCurvesCount > 2: - raise GlifLibError("Too many offcurves defined for curve.") - elif segmentType == "qcurve": - pass - else: - # unknown segment type. it'll be caught later. 
- pass - offCurvesCount = 0 - return massaged - - -# --------------------- -# Misc Helper Functions -# --------------------- - - -def _relaxedSetattr(object, attr, value): - try: - setattr(object, attr, value) - except AttributeError: - pass - - -def _number(s): - """ - Given a numeric string, return an integer or a float, whichever - the string indicates. _number("1") will return the integer 1, - _number("1.0") will return the float 1.0. - - >>> _number("1") - 1 - >>> _number("1.0") - 1.0 - >>> _number("a") # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - GlifLibError: Could not convert a to an int or float. - """ - try: - n = int(s) - return n - except ValueError: - pass - try: - n = float(s) - return n - except ValueError: - raise GlifLibError("Could not convert %s to an int or float." % s) - - -# -------------------- -# Rapid Value Fetching -# -------------------- - -# base - - -class _DoneParsing(Exception): - pass - - -class _BaseParser: - def __init__(self): - self._elementStack = [] - - def parse(self, text): - from xml.parsers.expat import ParserCreate - - parser = ParserCreate() - parser.StartElementHandler = self.startElementHandler - parser.EndElementHandler = self.endElementHandler - parser.Parse(text) - - def startElementHandler(self, name, attrs): - self._elementStack.append(name) - - def endElementHandler(self, name): - other = self._elementStack.pop(-1) - assert other == name - - -# unicodes - - -def _fetchUnicodes(glif): - """ - Get a list of unicodes listed in glif. - """ - parser = _FetchUnicodesParser() - parser.parse(glif) - return parser.unicodes - - -class _FetchUnicodesParser(_BaseParser): - def __init__(self): - self.unicodes = [] - super().__init__() - - def startElementHandler(self, name, attrs): - if ( - name == "unicode" - and self._elementStack - and self._elementStack[-1] == "glyph" - ): - value = attrs.get("hex") - if value is not None: - try: - value = int(value, 16) - if value not in self.unicodes: - self.unicodes.append(value) - except ValueError: - pass - super().startElementHandler(name, attrs) - - -# image - - -def _fetchImageFileName(glif): - """ - The image file name (if any) from glif. - """ - parser = _FetchImageFileNameParser() - try: - parser.parse(glif) - except _DoneParsing: - pass - return parser.fileName - - -class _FetchImageFileNameParser(_BaseParser): - def __init__(self): - self.fileName = None - super().__init__() - - def startElementHandler(self, name, attrs): - if name == "image" and self._elementStack and self._elementStack[-1] == "glyph": - self.fileName = attrs.get("fileName") - raise _DoneParsing - super().startElementHandler(name, attrs) - - -# component references - - -def _fetchComponentBases(glif): - """ - Get a list of component base glyphs listed in glif. 
- """ - parser = _FetchComponentBasesParser() - try: - parser.parse(glif) - except _DoneParsing: - pass - return list(parser.bases) - - -class _FetchComponentBasesParser(_BaseParser): - def __init__(self): - self.bases = [] - super().__init__() - - def startElementHandler(self, name, attrs): - if ( - name == "component" - and self._elementStack - and self._elementStack[-1] == "outline" - ): - base = attrs.get("base") - if base is not None: - self.bases.append(base) - super().startElementHandler(name, attrs) - - def endElementHandler(self, name): - if name == "outline": - raise _DoneParsing - super().endElementHandler(name) - - -# -------------- -# GLIF Point Pen -# -------------- - -_transformationInfo = [ - # field name, default value - ("xScale", 1), - ("xyScale", 0), - ("yxScale", 0), - ("yScale", 1), - ("xOffset", 0), - ("yOffset", 0), -] - - -class GLIFPointPen(AbstractPointPen): - - """ - Helper class using the PointPen protocol to write the - part of .glif files. - """ - - def __init__(self, element, formatVersion=None, identifiers=None, validate=True): - if identifiers is None: - identifiers = set() - self.formatVersion = GLIFFormatVersion(formatVersion) - self.identifiers = identifiers - self.outline = element - self.contour = None - self.prevOffCurveCount = 0 - self.prevPointTypes = [] - self.validate = validate - - def beginPath(self, identifier=None, **kwargs): - attrs = OrderedDict() - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - self.contour = etree.SubElement(self.outline, "contour", attrs) - self.prevOffCurveCount = 0 - - def endPath(self): - if self.prevPointTypes and self.prevPointTypes[0] == "move": - if self.validate and self.prevPointTypes[-1] == "offcurve": - raise GlifLibError("open contour has loose offcurve point") - # prevent lxml from writing self-closing tags - if not len(self.contour): - self.contour.text = "\n " - self.contour = None - self.prevPointType = None - self.prevOffCurveCount = 0 - self.prevPointTypes = [] - - def addPoint( - self, pt, segmentType=None, smooth=None, name=None, identifier=None, **kwargs - ): - attrs = OrderedDict() - # coordinates - if pt is not None: - if self.validate: - for coord in pt: - if not isinstance(coord, numberTypes): - raise GlifLibError("coordinates must be int or float") - attrs["x"] = repr(pt[0]) - attrs["y"] = repr(pt[1]) - # segment type - if segmentType == "offcurve": - segmentType = None - if self.validate: - if segmentType == "move" and self.prevPointTypes: - raise GlifLibError( - "move occurs after a point has already been added to the contour." - ) - if ( - segmentType in ("move", "line") - and self.prevPointTypes - and self.prevPointTypes[-1] == "offcurve" - ): - raise GlifLibError("offcurve occurs before %s point." 
% segmentType) - if segmentType == "curve" and self.prevOffCurveCount > 2: - raise GlifLibError("too many offcurve points before curve point.") - if segmentType is not None: - attrs["type"] = segmentType - else: - segmentType = "offcurve" - if segmentType == "offcurve": - self.prevOffCurveCount += 1 - else: - self.prevOffCurveCount = 0 - self.prevPointTypes.append(segmentType) - # smooth - if smooth: - if self.validate and segmentType == "offcurve": - raise GlifLibError("can't set smooth in an offcurve point.") - attrs["smooth"] = "yes" - # name - if name is not None: - attrs["name"] = name - # identifier - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - etree.SubElement(self.contour, "point", attrs) - - def addComponent(self, glyphName, transformation, identifier=None, **kwargs): - attrs = OrderedDict([("base", glyphName)]) - for (attr, default), value in zip(_transformationInfo, transformation): - if self.validate and not isinstance(value, numberTypes): - raise GlifLibError("transformation values must be int or float") - if value != default: - attrs[attr] = repr(value) - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if self.validate and not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - etree.SubElement(self.outline, "component", attrs) - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/file.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/file.ts deleted file mode 100644 index b277a9b1417eb70877687a754535e171314c1992..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/webworker/file.ts +++ /dev/null @@ -1,49 +0,0 @@ -import path from "path-browserify"; -import type { PyodideInterface } from "pyodide"; - -function ensureParent(pyodide: PyodideInterface, filePath: string): void { - const normalized = path.normalize(filePath); - - const dirPath = path.dirname(normalized); - - const dirNames = dirPath.split("/"); - - const chDirNames: string[] = []; - for (const dirName of dirNames) { - chDirNames.push(dirName); - const dirPath = chDirNames.join("/"); - - if (pyodide.FS.analyzePath(dirPath).exists) { - if (pyodide.FS.isDir(dirPath)) { - throw new Error(`"${dirPath}" already exists and is not a directory.`); - } - continue; - } - - try { - pyodide.FS.mkdir(dirPath); - } catch (err) { - console.error(`Failed to create a directory "${dirPath}"`); - throw err; - } - } -} - -export function writeFileWithParents( - pyodide: PyodideInterface, - filePath: string, - data: string | ArrayBufferView, - opts?: Parameters[2] -): void { - ensureParent(pyodide, filePath); - pyodide.FS.writeFile(filePath, data, opts); -} - -export function renameWithParents( - pyodide: PyodideInterface, - oldPath: 
string, - newPath: string -): void { - ensureParent(pyodide, newPath); - pyodide.FS.rename(oldPath, newPath); -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/tests/test_npy_pkg_config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/tests/test_npy_pkg_config.py deleted file mode 100644 index b287ebe2e83209fdcf5add161a7af8d988b9d086..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/tests/test_npy_pkg_config.py +++ /dev/null @@ -1,84 +0,0 @@ -import os - -from numpy.distutils.npy_pkg_config import read_config, parse_flags -from numpy.testing import temppath, assert_ - -simple = """\ -[meta] -Name = foo -Description = foo lib -Version = 0.1 - -[default] -cflags = -I/usr/include -libs = -L/usr/lib -""" -simple_d = {'cflags': '-I/usr/include', 'libflags': '-L/usr/lib', - 'version': '0.1', 'name': 'foo'} - -simple_variable = """\ -[meta] -Name = foo -Description = foo lib -Version = 0.1 - -[variables] -prefix = /foo/bar -libdir = ${prefix}/lib -includedir = ${prefix}/include - -[default] -cflags = -I${includedir} -libs = -L${libdir} -""" -simple_variable_d = {'cflags': '-I/foo/bar/include', 'libflags': '-L/foo/bar/lib', - 'version': '0.1', 'name': 'foo'} - -class TestLibraryInfo: - def test_simple(self): - with temppath('foo.ini') as path: - with open(path, 'w') as f: - f.write(simple) - pkg = os.path.splitext(path)[0] - out = read_config(pkg) - - assert_(out.cflags() == simple_d['cflags']) - assert_(out.libs() == simple_d['libflags']) - assert_(out.name == simple_d['name']) - assert_(out.version == simple_d['version']) - - def test_simple_variable(self): - with temppath('foo.ini') as path: - with open(path, 'w') as f: - f.write(simple_variable) - pkg = os.path.splitext(path)[0] - out = read_config(pkg) - - assert_(out.cflags() == simple_variable_d['cflags']) - assert_(out.libs() == simple_variable_d['libflags']) - assert_(out.name == simple_variable_d['name']) - assert_(out.version == simple_variable_d['version']) - out.vars['prefix'] = '/Users/david' - assert_(out.cflags() == '-I/Users/david/include') - -class TestParseFlags: - def test_simple_cflags(self): - d = parse_flags("-I/usr/include") - assert_(d['include_dirs'] == ['/usr/include']) - - d = parse_flags("-I/usr/include -DFOO") - assert_(d['include_dirs'] == ['/usr/include']) - assert_(d['macros'] == ['FOO']) - - d = parse_flags("-I /usr/include -DFOO") - assert_(d['include_dirs'] == ['/usr/include']) - assert_(d['macros'] == ['FOO']) - - def test_simple_lflags(self): - d = parse_flags("-L/usr/lib -lfoo -L/usr/lib -lbar") - assert_(d['library_dirs'] == ['/usr/lib', '/usr/lib']) - assert_(d['libraries'] == ['foo', 'bar']) - - d = parse_flags("-L /usr/lib -lfoo -L/usr/lib -lbar") - assert_(d['library_dirs'] == ['/usr/lib', '/usr/lib']) - assert_(d['libraries'] == ['foo', 'bar']) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/freeze.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/freeze.py deleted file mode 100644 index 5fa6d39b2c7c74635f9570c1e1665d03a45024b2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/freeze.py +++ /dev/null @@ -1,97 +0,0 @@ -import sys -from optparse import Values -from typing import List - -from pip._internal.cli import cmdoptions -from pip._internal.cli.base_command import Command 
-from pip._internal.cli.status_codes import SUCCESS -from pip._internal.operations.freeze import freeze -from pip._internal.utils.compat import stdlib_pkgs - -DEV_PKGS = {"pip", "setuptools", "distribute", "wheel"} - - -class FreezeCommand(Command): - """ - Output installed packages in requirements format. - - packages are listed in a case-insensitive sorted order. - """ - - usage = """ - %prog [options]""" - log_streams = ("ext://sys.stderr", "ext://sys.stderr") - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-r", - "--requirement", - dest="requirements", - action="append", - default=[], - metavar="file", - help=( - "Use the order in the given requirements file and its " - "comments when generating output. This option can be " - "used multiple times." - ), - ) - self.cmd_opts.add_option( - "-l", - "--local", - dest="local", - action="store_true", - default=False, - help=( - "If in a virtualenv that has global access, do not output " - "globally-installed packages." - ), - ) - self.cmd_opts.add_option( - "--user", - dest="user", - action="store_true", - default=False, - help="Only output packages installed in user-site.", - ) - self.cmd_opts.add_option(cmdoptions.list_path()) - self.cmd_opts.add_option( - "--all", - dest="freeze_all", - action="store_true", - help=( - "Do not skip these packages in the output:" - " {}".format(", ".join(DEV_PKGS)) - ), - ) - self.cmd_opts.add_option( - "--exclude-editable", - dest="exclude_editable", - action="store_true", - help="Exclude editable package from output.", - ) - self.cmd_opts.add_option(cmdoptions.list_exclude()) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - skip = set(stdlib_pkgs) - if not options.freeze_all: - skip.update(DEV_PKGS) - - if options.excludes: - skip.update(options.excludes) - - cmdoptions.check_list_path_option(options) - - for line in freeze( - requirement=options.requirements, - local_only=options.local, - user_only=options.user, - paths=options.path, - isolated=options.isolated_mode, - skip=skip, - exclude_editable=options.exclude_editable, - ): - sys.stdout.write(line + "\n") - return SUCCESS diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/fields.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/fields.py deleted file mode 100644 index 2a8f76899a6be64faf05e54aca2628ab549e5360..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/fields.py +++ /dev/null @@ -1,1177 +0,0 @@ -"""Defining fields on models.""" -from __future__ import annotations as _annotations - -import dataclasses -import inspect -import sys -import typing -from copy import copy -from dataclasses import Field as DataclassField - -try: - from functools import cached_property # type: ignore -except ImportError: - # python 3.7 - cached_property = None -from typing import Any, ClassVar -from warnings import warn - -import annotated_types -import typing_extensions -from pydantic_core import PydanticUndefined -from typing_extensions import Literal, Unpack - -from . 
import types -from ._internal import _decorators, _fields, _generics, _internal_dataclass, _repr, _typing_extra, _utils -from .errors import PydanticUserError -from .warnings import PydanticDeprecatedSince20 - -if typing.TYPE_CHECKING: - from ._internal._repr import ReprArgs -else: - # See PyCharm issues https://youtrack.jetbrains.com/issue/PY-21915 - # and https://youtrack.jetbrains.com/issue/PY-51428 - DeprecationWarning = PydanticDeprecatedSince20 - - -_Unset: Any = PydanticUndefined - - -class _FromFieldInfoInputs(typing_extensions.TypedDict, total=False): - """This class exists solely to add type checking for the `**kwargs` in `FieldInfo.from_field`.""" - - annotation: type[Any] | None - default_factory: typing.Callable[[], Any] | None - alias: str | None - alias_priority: int | None - validation_alias: str | AliasPath | AliasChoices | None - serialization_alias: str | None - title: str | None - description: str | None - examples: list[Any] | None - exclude: bool | None - gt: float | None - ge: float | None - lt: float | None - le: float | None - multiple_of: float | None - strict: bool | None - min_length: int | None - max_length: int | None - pattern: str | None - allow_inf_nan: bool | None - max_digits: int | None - decimal_places: int | None - union_mode: Literal['smart', 'left_to_right'] | None - discriminator: str | None - json_schema_extra: dict[str, Any] | typing.Callable[[dict[str, Any]], None] | None - frozen: bool | None - validate_default: bool | None - repr: bool - init_var: bool | None - kw_only: bool | None - - -class _FieldInfoInputs(_FromFieldInfoInputs, total=False): - """This class exists solely to add type checking for the `**kwargs` in `FieldInfo.__init__`.""" - - default: Any - - -class FieldInfo(_repr.Representation): - """This class holds information about a field. - - `FieldInfo` is used for any field definition regardless of whether the [`Field()`][pydantic.fields.Field] - function is explicitly used. - - !!! warning - You generally shouldn't be creating `FieldInfo` directly, you'll only need to use it when accessing - [`BaseModel`][pydantic.main.BaseModel] `.model_fields` internals. - - Attributes: - annotation: The type annotation of the field. - default: The default value of the field. - default_factory: The factory function used to construct the default for the field. - alias: The alias name of the field. - alias_priority: The priority of the field's alias. - validation_alias: The validation alias name of the field. - serialization_alias: The serialization alias name of the field. - title: The title of the field. - description: The description of the field. - examples: List of examples of the field. - exclude: Whether to exclude the field from the model serialization. - discriminator: Field name for discriminating the type in a tagged union. - json_schema_extra: Dictionary of extra JSON schema properties. - frozen: Whether the field is frozen. - validate_default: Whether to validate the default value of the field. - repr: Whether to include the field in representation of the model. - init_var: Whether the field should be included in the constructor of the dataclass. - kw_only: Whether the field should be a keyword-only argument in the constructor of the dataclass. - metadata: List of metadata constraints. 
- """ - - annotation: type[Any] | None - default: Any - default_factory: typing.Callable[[], Any] | None - alias: str | None - alias_priority: int | None - validation_alias: str | AliasPath | AliasChoices | None - serialization_alias: str | None - title: str | None - description: str | None - examples: list[Any] | None - exclude: bool | None - discriminator: str | None - json_schema_extra: dict[str, Any] | typing.Callable[[dict[str, Any]], None] | None - frozen: bool | None - validate_default: bool | None - repr: bool - init_var: bool | None - kw_only: bool | None - metadata: list[Any] - - __slots__ = ( - 'annotation', - 'default', - 'default_factory', - 'alias', - 'alias_priority', - 'validation_alias', - 'serialization_alias', - 'title', - 'description', - 'examples', - 'exclude', - 'discriminator', - 'json_schema_extra', - 'frozen', - 'validate_default', - 'repr', - 'init_var', - 'kw_only', - 'metadata', - '_attributes_set', - ) - - # used to convert kwargs to metadata/constraints, - # None has a special meaning - these items are collected into a `PydanticGeneralMetadata` - metadata_lookup: ClassVar[dict[str, typing.Callable[[Any], Any] | None]] = { - 'strict': types.Strict, - 'gt': annotated_types.Gt, - 'ge': annotated_types.Ge, - 'lt': annotated_types.Lt, - 'le': annotated_types.Le, - 'multiple_of': annotated_types.MultipleOf, - 'min_length': annotated_types.MinLen, - 'max_length': annotated_types.MaxLen, - 'pattern': None, - 'allow_inf_nan': None, - 'max_digits': None, - 'decimal_places': None, - 'union_mode': None, - } - - def __init__(self, **kwargs: Unpack[_FieldInfoInputs]) -> None: - """This class should generally not be initialized directly; instead, use the `pydantic.fields.Field` function - or one of the constructor classmethods. - - See the signature of `pydantic.fields.Field` for more details about the expected arguments. 
- """ - self._attributes_set = {k: v for k, v in kwargs.items() if v is not _Unset} - kwargs = {k: _DefaultValues.get(k) if v is _Unset else v for k, v in kwargs.items()} # type: ignore - self.annotation, annotation_metadata = self._extract_metadata(kwargs.get('annotation')) - - default = kwargs.pop('default', PydanticUndefined) - if default is Ellipsis: - self.default = PydanticUndefined - else: - self.default = default - - self.default_factory = kwargs.pop('default_factory', None) - - if self.default is not PydanticUndefined and self.default_factory is not None: - raise TypeError('cannot specify both default and default_factory') - - self.title = kwargs.pop('title', None) - self.alias = kwargs.pop('alias', None) - self.validation_alias = kwargs.pop('validation_alias', None) - self.serialization_alias = kwargs.pop('serialization_alias', None) - alias_is_set = any(alias is not None for alias in (self.alias, self.validation_alias, self.serialization_alias)) - self.alias_priority = kwargs.pop('alias_priority', None) or 2 if alias_is_set else None - self.description = kwargs.pop('description', None) - self.examples = kwargs.pop('examples', None) - self.exclude = kwargs.pop('exclude', None) - self.discriminator = kwargs.pop('discriminator', None) - self.repr = kwargs.pop('repr', True) - self.json_schema_extra = kwargs.pop('json_schema_extra', None) - self.validate_default = kwargs.pop('validate_default', None) - self.frozen = kwargs.pop('frozen', None) - # currently only used on dataclasses - self.init_var = kwargs.pop('init_var', None) - self.kw_only = kwargs.pop('kw_only', None) - - self.metadata = self._collect_metadata(kwargs) + annotation_metadata # type: ignore - - @classmethod - def from_field( - cls, default: Any = PydanticUndefined, **kwargs: Unpack[_FromFieldInfoInputs] - ) -> typing_extensions.Self: - """Create a new `FieldInfo` object with the `Field` function. - - Args: - default: The default value for the field. Defaults to Undefined. - **kwargs: Additional arguments dictionary. - - Raises: - TypeError: If 'annotation' is passed as a keyword argument. - - Returns: - A new FieldInfo object with the given parameters. - - Example: - This is how you can create a field with default value like this: - - ```python - import pydantic - - class MyModel(pydantic.BaseModel): - foo: int = pydantic.Field(4) - ``` - """ - if 'annotation' in kwargs: - raise TypeError('"annotation" is not permitted as a Field keyword argument') - return cls(default=default, **kwargs) - - @classmethod - def from_annotation(cls, annotation: type[Any]) -> typing_extensions.Self: - """Creates a `FieldInfo` instance from a bare annotation. - - Args: - annotation: An annotation object. - - Returns: - An instance of the field metadata. 
- - Example: - This is how you can create a field from a bare annotation like this: - - ```python - import pydantic - - class MyModel(pydantic.BaseModel): - foo: int # <-- like this - ``` - - We also account for the case where the annotation can be an instance of `Annotated` and where - one of the (not first) arguments in `Annotated` are an instance of `FieldInfo`, e.g.: - - ```python - import annotated_types - from typing_extensions import Annotated - - import pydantic - - class MyModel(pydantic.BaseModel): - foo: Annotated[int, annotated_types.Gt(42)] - bar: Annotated[int, pydantic.Field(gt=42)] - ``` - - """ - final = False - if _typing_extra.is_finalvar(annotation): - final = True - if annotation is not typing_extensions.Final: - annotation = typing_extensions.get_args(annotation)[0] - - if _typing_extra.is_annotated(annotation): - first_arg, *extra_args = typing_extensions.get_args(annotation) - if _typing_extra.is_finalvar(first_arg): - final = True - field_info_annotations = [a for a in extra_args if isinstance(a, FieldInfo)] - field_info = cls.merge_field_infos(*field_info_annotations, annotation=first_arg) - if field_info: - new_field_info = copy(field_info) - new_field_info.annotation = first_arg - new_field_info.frozen = final or field_info.frozen - metadata: list[Any] = [] - for a in extra_args: - if not isinstance(a, FieldInfo): - metadata.append(a) - else: - metadata.extend(a.metadata) - new_field_info.metadata = metadata - return new_field_info - - return cls(annotation=annotation, frozen=final or None) - - @classmethod - def from_annotated_attribute(cls, annotation: type[Any], default: Any) -> typing_extensions.Self: - """Create `FieldInfo` from an annotation with a default value. - - Args: - annotation: The type annotation of the field. - default: The default value of the field. - - Returns: - A field object with the passed values. 
- - Example: - ```python - import annotated_types - from typing_extensions import Annotated - - import pydantic - - class MyModel(pydantic.BaseModel): - foo: int = 4 # <-- like this - bar: Annotated[int, annotated_types.Gt(4)] = 4 # <-- or this - spam: Annotated[int, pydantic.Field(gt=4)] = 4 # <-- or this - ``` - """ - final = False - if _typing_extra.is_finalvar(annotation): - final = True - if annotation is not typing_extensions.Final: - annotation = typing_extensions.get_args(annotation)[0] - - if isinstance(default, cls): - default.annotation, annotation_metadata = cls._extract_metadata(annotation) - default.metadata += annotation_metadata - default = default.merge_field_infos( - *[x for x in annotation_metadata if isinstance(x, cls)], default, annotation=default.annotation - ) - default.frozen = final or default.frozen - return default - elif isinstance(default, dataclasses.Field): - init_var = False - if annotation is dataclasses.InitVar: - if sys.version_info < (3, 8): - raise RuntimeError('InitVar is not supported in Python 3.7 as type information is lost') - - init_var = True - annotation = Any - elif isinstance(annotation, dataclasses.InitVar): - init_var = True - annotation = annotation.type - pydantic_field = cls._from_dataclass_field(default) - pydantic_field.annotation, annotation_metadata = cls._extract_metadata(annotation) - pydantic_field.metadata += annotation_metadata - pydantic_field = pydantic_field.merge_field_infos( - *[x for x in annotation_metadata if isinstance(x, cls)], - pydantic_field, - annotation=pydantic_field.annotation, - ) - pydantic_field.frozen = final or pydantic_field.frozen - pydantic_field.init_var = init_var - pydantic_field.kw_only = getattr(default, 'kw_only', None) - return pydantic_field - else: - if _typing_extra.is_annotated(annotation): - first_arg, *extra_args = typing_extensions.get_args(annotation) - field_infos = [a for a in extra_args if isinstance(a, FieldInfo)] - field_info = cls.merge_field_infos(*field_infos, annotation=first_arg, default=default) - metadata: list[Any] = [] - for a in extra_args: - if not isinstance(a, FieldInfo): - metadata.append(a) - else: - metadata.extend(a.metadata) - field_info.metadata = metadata - return field_info - - return cls(annotation=annotation, default=default, frozen=final or None) - - @staticmethod - def merge_field_infos(*field_infos: FieldInfo, **overrides: Any) -> FieldInfo: - """Merge `FieldInfo` instances keeping only explicitly set attributes. - - Later `FieldInfo` instances override earlier ones. - - Returns: - FieldInfo: A merged FieldInfo instance. 
- """ - flattened_field_infos: list[FieldInfo] = [] - for field_info in field_infos: - flattened_field_infos.extend(x for x in field_info.metadata if isinstance(x, FieldInfo)) - flattened_field_infos.append(field_info) - field_infos = tuple(flattened_field_infos) - if len(field_infos) == 1: - # No merging necessary, but we still need to make a copy and apply the overrides - field_info = copy(field_infos[0]) - field_info._attributes_set.update(overrides) - for k, v in overrides.items(): - setattr(field_info, k, v) - return field_info - - new_kwargs: dict[str, Any] = {} - metadata = {} - for field_info in field_infos: - new_kwargs.update(field_info._attributes_set) - for x in field_info.metadata: - if not isinstance(x, FieldInfo): - metadata[type(x)] = x - new_kwargs.update(overrides) - field_info = FieldInfo(**new_kwargs) - field_info.metadata = list(metadata.values()) - return field_info - - @classmethod - def _from_dataclass_field(cls, dc_field: DataclassField[Any]) -> typing_extensions.Self: - """Return a new `FieldInfo` instance from a `dataclasses.Field` instance. - - Args: - dc_field: The `dataclasses.Field` instance to convert. - - Returns: - The corresponding `FieldInfo` instance. - - Raises: - TypeError: If any of the `FieldInfo` kwargs does not match the `dataclass.Field` kwargs. - """ - default = dc_field.default - if default is dataclasses.MISSING: - default = PydanticUndefined - - if dc_field.default_factory is dataclasses.MISSING: - default_factory: typing.Callable[[], Any] | None = None - else: - default_factory = dc_field.default_factory - - # use the `Field` function so in correct kwargs raise the correct `TypeError` - dc_field_metadata = {k: v for k, v in dc_field.metadata.items() if k in _FIELD_ARG_NAMES} - return Field(default=default, default_factory=default_factory, repr=dc_field.repr, **dc_field_metadata) - - @classmethod - def _extract_metadata(cls, annotation: type[Any] | None) -> tuple[type[Any] | None, list[Any]]: - """Tries to extract metadata/constraints from an annotation if it uses `Annotated`. - - Args: - annotation: The type hint annotation for which metadata has to be extracted. - - Returns: - A tuple containing the extracted metadata type and the list of extra arguments. - """ - if annotation is not None: - if _typing_extra.is_annotated(annotation): - first_arg, *extra_args = typing_extensions.get_args(annotation) - return first_arg, list(extra_args) - - return annotation, [] - - @classmethod - def _collect_metadata(cls, kwargs: dict[str, Any]) -> list[Any]: - """Collect annotations from kwargs. - - The return type is actually `annotated_types.BaseMetadata | PydanticMetadata`, - but it gets combined with `list[Any]` from `Annotated[T, ...]`, hence types. - - Args: - kwargs: Keyword arguments passed to the function. - - Returns: - A list of metadata objects - a combination of `annotated_types.BaseMetadata` and - `PydanticMetadata`. - """ - metadata: list[Any] = [] - general_metadata = {} - for key, value in list(kwargs.items()): - try: - marker = cls.metadata_lookup[key] - except KeyError: - continue - - del kwargs[key] - if value is not None: - if marker is None: - general_metadata[key] = value - else: - metadata.append(marker(value)) - if general_metadata: - metadata.append(_fields.PydanticGeneralMetadata(**general_metadata)) - return metadata - - def get_default(self, *, call_default_factory: bool = False) -> Any: - """Get the default value. 
- - We expose an option for whether to call the default_factory (if present), as calling it may - result in side effects that we want to avoid. However, there are times when it really should - be called (namely, when instantiating a model via `model_construct`). - - Args: - call_default_factory: Whether to call the default_factory or not. Defaults to `False`. - - Returns: - The default value, calling the default factory if requested or `None` if not set. - """ - if self.default_factory is None: - return _utils.smart_deepcopy(self.default) - elif call_default_factory: - return self.default_factory() - else: - return None - - def is_required(self) -> bool: - """Check if the argument is required. - - Returns: - `True` if the argument is required, `False` otherwise. - """ - return self.default is PydanticUndefined and self.default_factory is None - - def rebuild_annotation(self) -> Any: - """Rebuilds the original annotation for use in function signatures. - - If metadata is present, it adds it to the original annotation using an - `AnnotatedAlias`. Otherwise, it returns the original annotation as is. - - Returns: - The rebuilt annotation. - """ - if not self.metadata: - return self.annotation - else: - # Annotated arguments must be a tuple - return typing_extensions.Annotated[(self.annotation, *self.metadata)] # type: ignore - - def apply_typevars_map(self, typevars_map: dict[Any, Any] | None, types_namespace: dict[str, Any] | None) -> None: - """Apply a `typevars_map` to the annotation. - - This method is used when analyzing parametrized generic types to replace typevars with their concrete types. - - This method applies the `typevars_map` to the annotation in place. - - Args: - typevars_map: A dictionary mapping type variables to their concrete types. - types_namespace (dict | None): A dictionary containing related types to the annotated type. - - See Also: - pydantic._internal._generics.replace_types is used for replacing the typevars with - their concrete types. - """ - annotation = _typing_extra.eval_type_lenient(self.annotation, types_namespace, None) - self.annotation = _generics.replace_types(annotation, typevars_map) - - def __repr_args__(self) -> ReprArgs: - yield 'annotation', _repr.PlainRepr(_repr.display_as_type(self.annotation)) - yield 'required', self.is_required() - - for s in self.__slots__: - if s == '_attributes_set': - continue - if s == 'annotation': - continue - elif s == 'metadata' and not self.metadata: - continue - elif s == 'repr' and self.repr is True: - continue - if s == 'frozen' and self.frozen is False: - continue - if s == 'validation_alias' and self.validation_alias == self.alias: - continue - if s == 'serialization_alias' and self.serialization_alias == self.alias: - continue - if s == 'default_factory' and self.default_factory is not None: - yield 'default_factory', _repr.PlainRepr(_repr.display_as_type(self.default_factory)) - else: - value = getattr(self, s) - if value is not None and value is not PydanticUndefined: - yield s, value - - -@dataclasses.dataclass(**_internal_dataclass.slots_true) -class AliasPath: - """Usage docs: https://docs.pydantic.dev/2.4/concepts/fields#aliaspath-and-aliaschoices - - A data class used by `validation_alias` as a convenience to create aliases. - - Attributes: - path: A list of string or integer aliases. 
- """ - - path: list[int | str] - - def __init__(self, first_arg: str, *args: str | int) -> None: - self.path = [first_arg] + list(args) - - def convert_to_aliases(self) -> list[str | int]: - """Converts arguments to a list of string or integer aliases. - - Returns: - The list of aliases. - """ - return self.path - - -@dataclasses.dataclass(**_internal_dataclass.slots_true) -class AliasChoices: - """Usage docs: https://docs.pydantic.dev/2.4/concepts/fields#aliaspath-and-aliaschoices - - A data class used by `validation_alias` as a convenience to create aliases. - - Attributes: - choices: A list containing a string or `AliasPath`. - """ - - choices: list[str | AliasPath] - - def __init__(self, first_choice: str | AliasPath, *choices: str | AliasPath) -> None: - self.choices = [first_choice] + list(choices) - - def convert_to_aliases(self) -> list[list[str | int]]: - """Converts arguments to a list of lists containing string or integer aliases. - - Returns: - The list of aliases. - """ - aliases: list[list[str | int]] = [] - for c in self.choices: - if isinstance(c, AliasPath): - aliases.append(c.convert_to_aliases()) - else: - aliases.append([c]) - return aliases - - -class _EmptyKwargs(typing_extensions.TypedDict): - """This class exists solely to ensure that type checking warns about passing `**extra` in `Field`.""" - - -_DefaultValues = dict( - default=..., - default_factory=None, - alias=None, - alias_priority=None, - validation_alias=None, - serialization_alias=None, - title=None, - description=None, - examples=None, - exclude=None, - discriminator=None, - json_schema_extra=None, - frozen=None, - validate_default=None, - repr=True, - init_var=None, - kw_only=None, - pattern=None, - strict=None, - gt=None, - ge=None, - lt=None, - le=None, - multiple_of=None, - allow_inf_nan=None, - max_digits=None, - decimal_places=None, - min_length=None, - max_length=None, -) - - -def Field( # noqa: C901 - default: Any = PydanticUndefined, - *, - default_factory: typing.Callable[[], Any] | None = _Unset, - alias: str | None = _Unset, - alias_priority: int | None = _Unset, - validation_alias: str | AliasPath | AliasChoices | None = _Unset, - serialization_alias: str | None = _Unset, - title: str | None = _Unset, - description: str | None = _Unset, - examples: list[Any] | None = _Unset, - exclude: bool | None = _Unset, - discriminator: str | None = _Unset, - json_schema_extra: dict[str, Any] | typing.Callable[[dict[str, Any]], None] | None = _Unset, - frozen: bool | None = _Unset, - validate_default: bool | None = _Unset, - repr: bool = _Unset, - init_var: bool | None = _Unset, - kw_only: bool | None = _Unset, - pattern: str | None = _Unset, - strict: bool | None = _Unset, - gt: float | None = _Unset, - ge: float | None = _Unset, - lt: float | None = _Unset, - le: float | None = _Unset, - multiple_of: float | None = _Unset, - allow_inf_nan: bool | None = _Unset, - max_digits: int | None = _Unset, - decimal_places: int | None = _Unset, - min_length: int | None = _Unset, - max_length: int | None = _Unset, - union_mode: Literal['smart', 'left_to_right'] = _Unset, - **extra: Unpack[_EmptyKwargs], -) -> Any: - """Usage docs: https://docs.pydantic.dev/2.4/concepts/fields - - Create a field for objects that can be configured. - - Used to provide extra information about a field, either for the model schema or complex validation. Some arguments - apply only to number fields (`int`, `float`, `Decimal`) and some apply only to `str`. 
- - Note: - - Any `_Unset` objects will be replaced by the corresponding value defined in the `_DefaultValues` dictionary. If a key for the `_Unset` object is not found in the `_DefaultValues` dictionary, it will default to `None` - - Args: - default: Default value if the field is not set. - default_factory: A callable to generate the default value, such as :func:`~datetime.utcnow`. - alias: An alternative name for the attribute. - alias_priority: Priority of the alias. This affects whether an alias generator is used. - validation_alias: 'Whitelist' validation step. The field will be the single one allowed by the alias or set of - aliases defined. - serialization_alias: 'Blacklist' validation step. The vanilla field will be the single one of the alias' or set - of aliases' fields and all the other fields will be ignored at serialization time. - title: Human-readable title. - description: Human-readable description. - examples: Example values for this field. - exclude: Whether to exclude the field from the model serialization. - discriminator: Field name for discriminating the type in a tagged union. - json_schema_extra: Any additional JSON schema data for the schema property. - frozen: Whether the field is frozen. - validate_default: Run validation that isn't only checking existence of defaults. This can be set to `True` or `False`. If not set, it defaults to `None`. - repr: A boolean indicating whether to include the field in the `__repr__` output. - init_var: Whether the field should be included in the constructor of the dataclass. - kw_only: Whether the field should be a keyword-only argument in the constructor of the dataclass. - strict: If `True`, strict validation is applied to the field. - See [Strict Mode](../concepts/strict_mode.md) for details. - gt: Greater than. If set, value must be greater than this. Only applicable to numbers. - ge: Greater than or equal. If set, value must be greater than or equal to this. Only applicable to numbers. - lt: Less than. If set, value must be less than this. Only applicable to numbers. - le: Less than or equal. If set, value must be less than or equal to this. Only applicable to numbers. - multiple_of: Value must be a multiple of this. Only applicable to numbers. - min_length: Minimum length for strings. - max_length: Maximum length for strings. - pattern: Pattern for strings. - allow_inf_nan: Allow `inf`, `-inf`, `nan`. Only applicable to numbers. - max_digits: Maximum number of allow digits for strings. - decimal_places: Maximum number of decimal places allowed for numbers. - union_mode: The strategy to apply when validating a union. Can be `smart` (the default), or `left_to_right`. - See [Union Mode](standard_library_types.md#union-mode) for details. - extra: Include extra fields used by the JSON schema. - - !!! warning Deprecated - The `extra` kwargs is deprecated. Use `json_schema_extra` instead. - - Returns: - A new [`FieldInfo`][pydantic.fields.FieldInfo], the return annotation is `Any` so `Field` can be used on - type annotated fields without causing a typing error. - """ - # Check deprecated and removed params from V1. This logic should eventually be removed. 
- const = extra.pop('const', None) # type: ignore - if const is not None: - raise PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs') - - min_items = extra.pop('min_items', None) # type: ignore - if min_items is not None: - warn('`min_items` is deprecated and will be removed, use `min_length` instead', DeprecationWarning) - if min_length in (None, _Unset): - min_length = min_items # type: ignore - - max_items = extra.pop('max_items', None) # type: ignore - if max_items is not None: - warn('`max_items` is deprecated and will be removed, use `max_length` instead', DeprecationWarning) - if max_length in (None, _Unset): - max_length = max_items # type: ignore - - unique_items = extra.pop('unique_items', None) # type: ignore - if unique_items is not None: - raise PydanticUserError( - ( - '`unique_items` is removed, use `Set` instead' - '(this feature is discussed in https://github.com/pydantic/pydantic-core/issues/296)' - ), - code='removed-kwargs', - ) - - allow_mutation = extra.pop('allow_mutation', None) # type: ignore - if allow_mutation is not None: - warn('`allow_mutation` is deprecated and will be removed. use `frozen` instead', DeprecationWarning) - if allow_mutation is False: - frozen = True - - regex = extra.pop('regex', None) # type: ignore - if regex is not None: - raise PydanticUserError('`regex` is removed. use `pattern` instead', code='removed-kwargs') - - if extra: - warn( - 'Using extra keyword arguments on `Field` is deprecated and will be removed.' - ' Use `json_schema_extra` instead.' - f' (Extra keys: {", ".join(k.__repr__() for k in extra.keys())})', - DeprecationWarning, - ) - if not json_schema_extra or json_schema_extra is _Unset: - json_schema_extra = extra # type: ignore - - if ( - validation_alias - and validation_alias is not _Unset - and not isinstance(validation_alias, (str, AliasChoices, AliasPath)) - ): - raise TypeError('Invalid `validation_alias` type. it should be `str`, `AliasChoices`, or `AliasPath`') - - if serialization_alias in (_Unset, None) and isinstance(alias, str): - serialization_alias = alias - - if validation_alias in (_Unset, None): - validation_alias = alias - - include = extra.pop('include', None) # type: ignore - if include is not None: - warn('`include` is deprecated and does nothing. It will be removed, use `exclude` instead', DeprecationWarning) - - return FieldInfo.from_field( - default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - examples=examples, - exclude=exclude, - discriminator=discriminator, - json_schema_extra=json_schema_extra, - frozen=frozen, - pattern=pattern, - validate_default=validate_default, - repr=repr, - init_var=init_var, - kw_only=kw_only, - strict=strict, - gt=gt, - ge=ge, - lt=lt, - le=le, - multiple_of=multiple_of, - min_length=min_length, - max_length=max_length, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - union_mode=union_mode, - ) - - -_FIELD_ARG_NAMES = set(inspect.signature(Field).parameters) -_FIELD_ARG_NAMES.remove('extra') # do not include the varkwargs parameter - - -class ModelPrivateAttr(_repr.Representation): - """A descriptor for private attributes in class models. - - Attributes: - default: The default value of the attribute if not provided. - default_factory: A callable function that generates the default value of the - attribute if not provided. 
- """ - - __slots__ = 'default', 'default_factory' - - def __init__( - self, default: Any = PydanticUndefined, *, default_factory: typing.Callable[[], Any] | None = None - ) -> None: - self.default = default - self.default_factory = default_factory - - if not typing.TYPE_CHECKING: - # We put `__getattr__` in a non-TYPE_CHECKING block because otherwise, mypy allows arbitrary attribute access - - def __getattr__(self, item: str) -> Any: - """This function improves compatibility with custom descriptors by ensuring delegation happens - as expected when the default value of a private attribute is a descriptor. - """ - if item in {'__get__', '__set__', '__delete__'}: - if hasattr(self.default, item): - return getattr(self.default, item) - raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') - - def __set_name__(self, cls: type[Any], name: str) -> None: - """Preserve `__set_name__` protocol defined in https://peps.python.org/pep-0487.""" - if self.default is PydanticUndefined: - return - if not hasattr(self.default, '__set_name__'): - return - set_name = self.default.__set_name__ - if callable(set_name): - set_name(cls, name) - - def get_default(self) -> Any: - """Retrieve the default value of the object. - - If `self.default_factory` is `None`, the method will return a deep copy of the `self.default` object. - - If `self.default_factory` is not `None`, it will call `self.default_factory` and return the value returned. - - Returns: - The default value of the object. - """ - return _utils.smart_deepcopy(self.default) if self.default_factory is None else self.default_factory() - - def __eq__(self, other: Any) -> bool: - return isinstance(other, self.__class__) and (self.default, self.default_factory) == ( - other.default, - other.default_factory, - ) - - -def PrivateAttr( - default: Any = PydanticUndefined, - *, - default_factory: typing.Callable[[], Any] | None = None, -) -> Any: - """Indicates that attribute is only used internally and never mixed with regular fields. - - Private attributes are not checked by Pydantic, so it's up to you to maintain their accuracy. - - Private attributes are stored in `__private_attributes__` on the model. - - Args: - default: The attribute's default value. Defaults to Undefined. - default_factory: Callable that will be - called when a default value is needed for this attribute. - If both `default` and `default_factory` are set, an error will be raised. - - Returns: - An instance of [`ModelPrivateAttr`][pydantic.fields.ModelPrivateAttr] class. - - Raises: - ValueError: If both `default` and `default_factory` are set. - """ - if default is not PydanticUndefined and default_factory is not None: - raise TypeError('cannot specify both default and default_factory') - - return ModelPrivateAttr( - default, - default_factory=default_factory, - ) - - -@dataclasses.dataclass(**_internal_dataclass.slots_true) -class ComputedFieldInfo: - """A container for data from `@computed_field` so that we can access it while building the pydantic-core schema. - - Attributes: - decorator_repr: A class variable representing the decorator string, '@computed_field'. - wrapped_property: The wrapped computed field property. - return_type: The type of the computed field property's return value. - alias: The alias of the property to be used during encoding and decoding. - alias_priority: priority of the alias. This affects whether an alias generator is used - title: Title of the computed field as in OpenAPI document, should be a short summary. 
- description: Description of the computed field as in OpenAPI document. - repr: A boolean indicating whether or not to include the field in the __repr__ output. - """ - - decorator_repr: ClassVar[str] = '@computed_field' - wrapped_property: property - return_type: Any - alias: str | None - alias_priority: int | None - title: str | None - description: str | None - repr: bool - - -# this should really be `property[T], cached_proprety[T]` but property is not generic unlike cached_property -# See https://github.com/python/typing/issues/985 and linked issues -PropertyT = typing.TypeVar('PropertyT') - - -@typing.overload -def computed_field( - *, - return_type: Any = PydanticUndefined, - alias: str | None = None, - alias_priority: int | None = None, - title: str | None = None, - description: str | None = None, - repr: bool = True, -) -> typing.Callable[[PropertyT], PropertyT]: - ... - - -@typing.overload -def computed_field(__func: PropertyT) -> PropertyT: - ... - - -def _wrapped_property_is_private(property_: cached_property | property) -> bool: # type: ignore - """Returns true if provided property is private, False otherwise.""" - wrapped_name: str = '' - - if isinstance(property_, property): - wrapped_name = getattr(property_.fget, '__name__', '') - elif isinstance(property_, cached_property): # type: ignore - wrapped_name = getattr(property_.func, '__name__', '') # type: ignore - - return wrapped_name.startswith('_') and not wrapped_name.startswith('__') - - -def computed_field( - __f: PropertyT | None = None, - *, - alias: str | None = None, - alias_priority: int | None = None, - title: str | None = None, - description: str | None = None, - repr: bool | None = None, - return_type: Any = PydanticUndefined, -) -> PropertyT | typing.Callable[[PropertyT], PropertyT]: - """Decorator to include `property` and `cached_property` when serializing models or dataclasses. - - This is useful for fields that are computed from other fields, or for fields that are expensive to compute and should be cached. - - ```py - from pydantic import BaseModel, computed_field - - class Rectangle(BaseModel): - width: int - length: int - - @computed_field - @property - def area(self) -> int: - return self.width * self.length - - print(Rectangle(width=3, length=2).model_dump()) - #> {'width': 3, 'length': 2, 'area': 6} - ``` - - If applied to functions not yet decorated with `@property` or `@cached_property`, the function is - automatically wrapped with `property`. Although this is more concise, you will lose IntelliSense in your IDE, - and confuse static type checkers, thus explicit use of `@property` is recommended. - - !!! warning "Mypy Warning" - Even with the `@property` or `@cached_property` applied to your function before `@computed_field`, - mypy may throw a `Decorated property not supported` error. - See [mypy issue #1362](https://github.com/python/mypy/issues/1362), for more information. - To avoid this error message, add `# type: ignore[misc]` to the `@computed_field` line. - - [pyright](https://github.com/microsoft/pyright) supports `@computed_field` without error. 
- - ```py - import random - - from pydantic import BaseModel, computed_field - - class Square(BaseModel): - width: float - - @computed_field - def area(self) -> float: # converted to a `property` by `computed_field` - return round(self.width**2, 2) - - @area.setter - def area(self, new_area: float) -> None: - self.width = new_area**0.5 - - @computed_field(alias='the magic number', repr=False) - def random_number(self) -> int: - return random.randint(0, 1_000) - - square = Square(width=1.3) - - # `random_number` does not appear in representation - print(repr(square)) - #> Square(width=1.3, area=1.69) - - print(square.random_number) - #> 3 - - square.area = 4 - - print(square.model_dump_json(by_alias=True)) - #> {"width":2.0,"area":4.0,"the magic number":3} - ``` - - !!! warning "Overriding with `computed_field`" - You can't override a field from a parent class with a `computed_field` in the child class. - `mypy` complains about this behavior if allowed, and `dataclasses` doesn't allow this pattern either. - See the example below: - - ```py - from pydantic import BaseModel, computed_field - - class Parent(BaseModel): - a: str - - try: - - class Child(Parent): - @computed_field - @property - def a(self) -> str: - return 'new a' - - except ValueError as e: - print(repr(e)) - #> ValueError("you can't override a field with a computed field") - ``` - - Private properties decorated with `@computed_field` have `repr=False` by default. - - ```py - from functools import cached_property - - from pydantic import BaseModel, computed_field - - class Model(BaseModel): - foo: int - - @computed_field - @cached_property - def _private_cached_property(self) -> int: - return -self.foo - - @computed_field - @property - def _private_property(self) -> int: - return -self.foo - - m = Model(foo=1) - print(repr(m)) - #> M(foo=1) - ``` - - Args: - __f: the function to wrap. - alias: alias to use when serializing this computed field, only used when `by_alias=True` - alias_priority: priority of the alias. This affects whether an alias generator is used - title: Title to used when including this computed field in JSON Schema, currently unused waiting for #4697 - description: Description to used when including this computed field in JSON Schema, defaults to the functions - docstring, currently unused waiting for #4697 - repr: whether to include this computed field in model repr. - Default is `False` for private properties and `True` for public properties. - return_type: optional return for serialization logic to expect when serializing to JSON, if included - this must be correct, otherwise a `TypeError` is raised. - If you don't include a return type Any is used, which does runtime introspection to handle arbitrary - objects. - - Returns: - A proxy wrapper for the property. 
- """ - - def dec(f: Any) -> Any: - nonlocal description, return_type, alias_priority - unwrapped = _decorators.unwrap_wrapped_function(f) - if description is None and unwrapped.__doc__: - description = inspect.cleandoc(unwrapped.__doc__) - - # if the function isn't already decorated with `@property` (or another descriptor), then we wrap it now - f = _decorators.ensure_property(f) - alias_priority = (alias_priority or 2) if alias is not None else None - - if repr is None: - repr_: bool = False if _wrapped_property_is_private(property_=f) else True - else: - repr_ = repr - - dec_info = ComputedFieldInfo(f, return_type, alias, alias_priority, title, description, repr_) - return _decorators.PydanticDescriptorProxy(f, dec_info) - - if __f is None: - return dec - else: - return dec(__f) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/staroffice.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/staroffice.py deleted file mode 100644 index 0f6cbaeb29d5aafc320269f0c9df41ddd356c2b4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/staroffice.py +++ /dev/null @@ -1,26 +0,0 @@ -""" - pygments.styles.staroffice - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Style similar to StarOffice style, also in OpenOffice and LibreOffice. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Comment, Error, Literal, Name, Token - - -class StarofficeStyle(Style): - """ - Style similar to StarOffice style, also in OpenOffice and LibreOffice. - """ - - styles = { - Token: '#000080', # Blue - Comment: '#696969', # DimGray - Error: '#800000', # Maroon - Literal: '#EE0000', # Red - Name: '#008000', # Green - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_request_methods.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_request_methods.py deleted file mode 100644 index 1d0f3465adf51558bd3b5111aad11fd4fc189433..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_request_methods.py +++ /dev/null @@ -1,217 +0,0 @@ -from __future__ import annotations - -import json as _json -import typing -from urllib.parse import urlencode - -from ._base_connection import _TYPE_BODY -from ._collections import HTTPHeaderDict -from .filepost import _TYPE_FIELDS, encode_multipart_formdata -from .response import BaseHTTPResponse - -__all__ = ["RequestMethods"] - -_TYPE_ENCODE_URL_FIELDS = typing.Union[ - typing.Sequence[typing.Tuple[str, typing.Union[str, bytes]]], - typing.Mapping[str, typing.Union[str, bytes]], -] - - -class RequestMethods: - """ - Convenience mixin for classes who implement a :meth:`urlopen` method, such - as :class:`urllib3.HTTPConnectionPool` and - :class:`urllib3.PoolManager`. - - Provides behavior for making common types of HTTP request methods and - decides which type of request field encoding to use. - - Specifically, - - :meth:`.request_encode_url` is for sending requests whose fields are - encoded in the URL (such as GET, HEAD, DELETE). - - :meth:`.request_encode_body` is for sending requests whose fields are - encoded in the *body* of the request using multipart or www-form-urlencoded - (such as for POST, PUT, PATCH). 
- - :meth:`.request` is for making any kind of request, it will look up the - appropriate encoding format and use one of the above two methods to make - the request. - - Initializer parameters: - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. - """ - - _encode_url_methods = {"DELETE", "GET", "HEAD", "OPTIONS"} - - def __init__(self, headers: typing.Mapping[str, str] | None = None) -> None: - self.headers = headers or {} - - def urlopen( - self, - method: str, - url: str, - body: _TYPE_BODY | None = None, - headers: typing.Mapping[str, str] | None = None, - encode_multipart: bool = True, - multipart_boundary: str | None = None, - **kw: typing.Any, - ) -> BaseHTTPResponse: # Abstract - raise NotImplementedError( - "Classes extending RequestMethods must implement " - "their own ``urlopen`` method." - ) - - def request( - self, - method: str, - url: str, - body: _TYPE_BODY | None = None, - fields: _TYPE_FIELDS | None = None, - headers: typing.Mapping[str, str] | None = None, - json: typing.Any | None = None, - **urlopen_kw: typing.Any, - ) -> BaseHTTPResponse: - """ - Make a request using :meth:`urlopen` with the appropriate encoding of - ``fields`` based on the ``method`` used. - - This is a convenience method that requires the least amount of manual - effort. It can be used in most situations, while still having the - option to drop down to more specific methods when necessary, such as - :meth:`request_encode_url`, :meth:`request_encode_body`, - or even the lowest level :meth:`urlopen`. - """ - method = method.upper() - - if json is not None and body is not None: - raise TypeError( - "request got values for both 'body' and 'json' parameters which are mutually exclusive" - ) - - if json is not None: - if headers is None: - headers = self.headers.copy() # type: ignore - if not ("content-type" in map(str.lower, headers.keys())): - headers["Content-Type"] = "application/json" # type: ignore - - body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( - "utf-8" - ) - - if body is not None: - urlopen_kw["body"] = body - - if method in self._encode_url_methods: - return self.request_encode_url( - method, - url, - fields=fields, # type: ignore[arg-type] - headers=headers, - **urlopen_kw, - ) - else: - return self.request_encode_body( - method, url, fields=fields, headers=headers, **urlopen_kw - ) - - def request_encode_url( - self, - method: str, - url: str, - fields: _TYPE_ENCODE_URL_FIELDS | None = None, - headers: typing.Mapping[str, str] | None = None, - **urlopen_kw: str, - ) -> BaseHTTPResponse: - """ - Make a request using :meth:`urlopen` with the ``fields`` encoded in - the url. This is useful for request methods like GET, HEAD, DELETE, etc. - """ - if headers is None: - headers = self.headers - - extra_kw: dict[str, typing.Any] = {"headers": headers} - extra_kw.update(urlopen_kw) - - if fields: - url += "?" + urlencode(fields) - - return self.urlopen(method, url, **extra_kw) - - def request_encode_body( - self, - method: str, - url: str, - fields: _TYPE_FIELDS | None = None, - headers: typing.Mapping[str, str] | None = None, - encode_multipart: bool = True, - multipart_boundary: str | None = None, - **urlopen_kw: str, - ) -> BaseHTTPResponse: - """ - Make a request using :meth:`urlopen` with the ``fields`` encoded in - the body. This is useful for request methods like POST, PUT, PATCH, etc. 
- - When ``encode_multipart=True`` (default), then - :func:`urllib3.encode_multipart_formdata` is used to encode - the payload with the appropriate content type. Otherwise - :func:`urllib.parse.urlencode` is used with the - 'application/x-www-form-urlencoded' content type. - - Multipart encoding must be used when posting files, and it's reasonably - safe to use it in other times too. However, it may break request - signing, such as with OAuth. - - Supports an optional ``fields`` parameter of key/value strings AND - key/filetuple. A filetuple is a (filename, data, MIME type) tuple where - the MIME type is optional. For example:: - - fields = { - 'foo': 'bar', - 'fakefile': ('foofile.txt', 'contents of foofile'), - 'realfile': ('barfile.txt', open('realfile').read()), - 'typedfile': ('bazfile.bin', open('bazfile').read(), - 'image/jpeg'), - 'nonamefile': 'contents of nonamefile field', - } - - When uploading a file, providing a filename (the first parameter of the - tuple) is optional but recommended to best mimic behavior of browsers. - - Note that if ``headers`` are supplied, the 'Content-Type' header will - be overwritten because it depends on the dynamic random boundary string - which is used to compose the body of the request. The random boundary - string can be explicitly set with the ``multipart_boundary`` parameter. - """ - if headers is None: - headers = self.headers - - extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} - body: bytes | str - - if fields: - if "body" in urlopen_kw: - raise TypeError( - "request got values for both 'fields' and 'body', can only specify one." - ) - - if encode_multipart: - body, content_type = encode_multipart_formdata( - fields, boundary=multipart_boundary - ) - else: - body, content_type = ( - urlencode(fields), # type: ignore[arg-type] - "application/x-www-form-urlencoded", - ) - - extra_kw["body"] = body - extra_kw["headers"].setdefault("Content-Type", content_type) - - extra_kw.update(urlopen_kw) - - return self.urlopen(method, url, **extra_kw) diff --git a/spaces/pyesonekyaw/faceforgerydetection/Weights/README.md b/spaces/pyesonekyaw/faceforgerydetection/Weights/README.md deleted file mode 100644 index 9dd5480ceb6b93bb7f4c73e1ea7bfd156c56e06f..0000000000000000000000000000000000000000 --- a/spaces/pyesonekyaw/faceforgerydetection/Weights/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Faceforgerydetection -emoji: 💩 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pytorch/WaveGlow/README.md b/spaces/pytorch/WaveGlow/README.md deleted file mode 100644 index ece7418683a90f06a406b360cfe2848d802295f9..0000000000000000000000000000000000000000 --- a/spaces/pytorch/WaveGlow/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: WaveGlow -emoji: 🏃 -colorFrom: blue -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Belle Beauty Boutique Free Full Version.md b/spaces/quidiaMuxgu/Expedit-SAM/Belle Beauty Boutique Free Full Version.md deleted file mode 100644 index 9581703a2599fade65e78cc971900352d12e245c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Belle Beauty Boutique Free Full Version.md +++ /dev/null @@ -1,22 +0,0 @@ -
          -

          How to Download Belle Beauty Boutique Free Full Version for PC

          - -

          If you are looking for a fun and relaxing game that lets you run your own beauty salon, you might want to try Belle Beauty Boutique. This game is a time management game that challenges you to help Belle serve her customers with various beauty treatments. You will have to wash, cut, shampoo, and color their hair, as well as apply makeup and accessories. You will also have to deal with different personalities and preferences of your clients, who will gossip, flirt, and tip you depending on your performance.

          - -

          Belle Beauty Boutique is a game that will appeal to anyone who loves beauty and makeup. The game features two modes: story mode and endless mode. In story mode, you will follow Belle's journey as she tries to realize her dream of creating the ultimate beauty salon. You will have to complete different levels with increasing difficulty and unlock new items and upgrades. In endless mode, you will be able to play as long as you can without running out of time or losing customers. You will be able to test your skills and knowledge of beauty products and see how much money you can make.

          -

          Belle Beauty Boutique Free Full Version


Download File: https://geags.com/2uCrsM



          - -

          Belle Beauty Boutique is a game that has colorful and attractive graphics, catchy music, and humorous dialogues. The game is easy to understand and play, but it also requires strategy and speed. You will have to manage your time wisely and keep your customers happy. You will also have to balance your budget and invest in new equipment and staff. The game is suitable for players of all ages and genders.

          - -

          If you want to download Belle Beauty Boutique free full version for PC, you have several options. You can visit the official website of the developer, Iplay, and purchase the game for $19.95. You can also download a free trial version of the game from the same website or from other websites such as Filehippo or CasualGameGuides. The free trial version will let you play the game for 60 minutes before you decide if you want to buy it or not.

          - -

Another option is to download Belle Beauty Boutique free full version for PC from websites that offer free games or cracked versions of games. However, this option is not recommended, as it may expose your computer to viruses, malware, or other security risks. It may also violate copyright law or the game developer's terms of service.

          - -

          Therefore, the best option is to download Belle Beauty Boutique free full version for PC from a reputable website that offers legal downloads of games. One such website is Gamezhero.com, which offers hundreds of free games for PC and mobile devices. You can find Belle Beauty Boutique on Gamezhero.com by searching for its name or browsing through the categories of girl games or time management games. You can play Belle Beauty Boutique online on Gamezhero.com without downloading anything or registering an account. You can also download the game for free by clicking on the download button on the game page.

          - -

          By downloading Belle Beauty Boutique free full version for PC from Gamezhero.com, you will be able to enjoy this fun and addictive game without any hassle or risk. You will be able to help Belle run her beauty salon and make her customers look fabulous. You will also be able to improve your time management skills and learn more about beauty products and trends.

          -

          - -

          So what are you waiting for? Download Belle Beauty Boutique free full version for PC from Gamezhero.com today and start your own beauty adventure!

          -
          -
          \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Control Systems Book By Ganesh Rao Pdf Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Control Systems Book By Ganesh Rao Pdf Download.md deleted file mode 100644 index f5d3859a3c2d85c8e23b991d784e4d078481f722..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Control Systems Book By Ganesh Rao Pdf Download.md +++ /dev/null @@ -1,24 +0,0 @@ -

Control Systems Book by Ganesh Rao PDF Download


          Download Zip 🆓 https://geags.com/2uCs34



          - -iphone - -The real controls book by ganesh rao pdf iphone - -In this book, weve captured in depth all of the knowledge weve gained from our two decades of applying computer science theory to our own software. There are four key sections: An introduction to Python basics - What is Python and how does it fit into the world of software development? - -And, why is it a useful tool? We explore Python syntax, language features, how it works, and how it fits into the modern world of software development. These core Python fundamentals also support the understanding of the other sections. Cython gives you full power over your computer, but you can think of it as a translation tool. - -The result is a language that is extremely high level. Cython and Pyrex are both Microsoft language extensions, meaning you can use them with. The creation of Cython was motivated by Python developers reporting that they wished they could write their code more quickly, and that the current Python build cycle was too slow. - -Open up System Preferences and check under the Displays menu. Click Advanced in the Display Preferences. In the Display Settings menu, click Detect Displays. Click No Detect Display on startup. Next, click Detect Display on Startup. Finally, click Detect Displays Automatically. The Display preferences menu will be unavailable, and the Display Settings menu will be unavailable as well. - -And, we just want to make this as easy as possible for you. Right now, we want to introduce you to the process we use to get your project ready to become a reality. The whole process can take up to two weeks depending on the complexity of your project. - -We do this by providing you with the resources needed to complete the job. In the project itself, you will have access to the code editor, git, a simple build pipeline, and we also provide our own cloud server to deploy your code to. - -Finally, we want to make sure that you get the most out of your CSA-Project. If you are a freelance developer and you have a lot of projects, then we want you to maintain a clear overview of your invoicing. So, we just added the visibility of your projects in your profile. - -With us, you can see all your projects in one place. For each project, you can see a timeline, a detailed description, detailed activity reports, and the invoice. We want to make sure you dont forget anything, so we added the capability to 4fefd39f24
          -
          -
          -

          diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/Dockerfile b/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/Dockerfile deleted file mode 100644 index aaa69a03a573d571cbf373d365f8e6cca9ef3abe..0000000000000000000000000000000000000000 --- a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/Dockerfile +++ /dev/null @@ -1,44 +0,0 @@ -FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 - -ARG DEBIAN_FRONTEND=noninteractive - -ENV PYTHONUNBUFFERED=1 - -RUN apt-get update && apt-get install --no-install-recommends -y \ - build-essential \ - python3.9 \ - python3-pip \ - python3-dev \ - git \ - ffmpeg \ - google-perftools \ - && apt-get clean && rm -rf /var/lib/apt/lists/* - - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH \ - PYTHONPATH=$HOME/app \ - PYTHONUNBUFFERED=1 \ - SYSTEM=spaces - -RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 - -# CMD ["uvicorn", "app-img2img:app", "--host", "0.0.0.0", "--port", "7860"] -CMD ["uvicorn", "app-txt2img:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/scripts/generate_sketch_data.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/scripts/generate_sketch_data.py deleted file mode 100644 index a13acf949bf2efb3449f13922b7489e5c06880a3..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/scripts/generate_sketch_data.py +++ /dev/null @@ -1,62 +0,0 @@ -from torchvision import transforms -from torchvision.utils import save_image -from torch.utils.serialization import load_lua -import os -import cv2 -import numpy as np - -""" -NOTE!: Must have torch==0.4.1 and torchvision==0.2.1 -The sketch simplification model (sketch_gan.t7) from Simo Serra et al. 
can be downloaded from their official implementation: - https://github.com/bobbens/sketch_simplification -""" - - -def sobel(img): - opImgx = cv2.Sobel(img, cv2.CV_8U, 0, 1, ksize=3) - opImgy = cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=3) - return cv2.bitwise_or(opImgx, opImgy) - - -def sketch(frame): - frame = cv2.GaussianBlur(frame, (3, 3), 0) - invImg = 255 - frame - edgImg0 = sobel(frame) - edgImg1 = sobel(invImg) - edgImg = cv2.addWeighted(edgImg0, 0.75, edgImg1, 0.75, 0) - opImg = 255 - edgImg - return opImg - - -def get_sketch_image(image_path): - original = cv2.imread(image_path) - original = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY) - sketch_image = sketch(original) - return sketch_image[:, :, np.newaxis] - - -use_cuda = True - -cache = load_lua("/path/to/sketch_gan.t7") -model = cache.model -immean = cache.mean -imstd = cache.std -model.evaluate() - -data_path = "/path/to/data/imgs" -images = [os.path.join(data_path, f) for f in os.listdir(data_path)] - -output_dir = "/path/to/data/edges" -if not os.path.exists(output_dir): - os.makedirs(output_dir) - -for idx, image_path in enumerate(images): - if idx % 50 == 0: - print("{} out of {}".format(idx, len(images))) - data = get_sketch_image(image_path) - data = ((transforms.ToTensor()(data) - immean) / imstd).unsqueeze(0) - if use_cuda: - pred = model.cuda().forward(data.cuda()).float() - else: - pred = model.forward(data) - save_image(pred[0], os.path.join(output_dir, "{}_edges.jpg".format(image_path.split("/")[-1].split('.')[0]))) diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AKVIS Sketch 14.0.2545 Portable ((EXCLUSIVE)).md b/spaces/raedeXanto/academic-chatgpt-beta/AKVIS Sketch 14.0.2545 Portable ((EXCLUSIVE)).md deleted file mode 100644 index 618c7741458ddc0dc15f1a839e9ba225cf194952..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/AKVIS Sketch 14.0.2545 Portable ((EXCLUSIVE)).md +++ /dev/null @@ -1,27 +0,0 @@ -
          -

          How to Turn Your Photos into Sketches with AKVIS Sketch 14.0.2545 Portable

          -

          Do you want to create stunning sketches from your photos without installing any software? If yes, then you should try AKVIS Sketch 14.0.2545 Portable, a powerful and easy-to-use tool that can convert any image into a realistic pencil drawing or a watercolor painting.

          -

          AKVIS Sketch 14.0.2545 Portable is a standalone version of the popular AKVIS Sketch software that does not require installation and can run from any removable device, such as a USB flash drive or an external hard disk. You can use it on any computer without leaving any traces of your activity.

          -

          AKVIS Sketch 14.0.2545 Portable ((EXCLUSIVE))


          Download Zip ►►► https://tinourl.com/2uL3Kx



          -

          With AKVIS Sketch 14.0.2545 Portable, you can create amazing sketches in just a few clicks. You can adjust the parameters of the sketch effect, such as the stroke direction, the edge intensity, the coloration, the noise level, and the background type. You can also apply various artistic effects, such as charcoal, pastel, hatching, or stippling.
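AKVIS does not publish the exact algorithm behind these controls, but if you are curious how a basic pencil-sketch effect can be produced in code, here is a minimal Python example using OpenCV's blur-and-divide ("color dodge") technique. It is only an illustration of the general idea, not AKVIS's method, and the file names are placeholders.

```python
import cv2

# Load a photo and convert it to grayscale ("photo.jpg" is a placeholder path).
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Invert and blur; the kernel size roughly controls how soft the shading looks.
inverted = 255 - gray
blurred = cv2.GaussianBlur(inverted, (21, 21), 0)

# "Color dodge" blend: dividing by the inverted blur keeps dark edges on a white background.
sketch = cv2.divide(gray, 255 - blurred, scale=256)

cv2.imwrite("sketch.jpg", sketch)
```

The kernel size (21, 21) is the main knob here: a larger blur gives softer, broader strokes, while a smaller one keeps only hard edges.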

          -

          AKVIS Sketch 14.0.2545 Portable supports various image formats, such as JPEG, PNG, TIFF, BMP, and RAW. You can also save your sketches in PDF format for easy printing or sharing. You can also batch process multiple images at once and create stunning photo albums or collages.
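AKVIS handles batch processing through its own interface, but the underlying idea, applying one conversion to every file in a folder, can be sketched in a few lines of Python. The folder names below are hypothetical, and the conversion used is the simple OpenCV effect from the example above, not AKVIS's own renderer.

```python
from pathlib import Path

import cv2

input_dir = Path("photos")     # hypothetical folder of source photos
output_dir = Path("sketches")  # hypothetical folder for the results
output_dir.mkdir(exist_ok=True)

for path in input_dir.glob("*.jpg"):
    image = cv2.imread(str(path))
    if image is None:          # skip files OpenCV cannot read
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(255 - gray, (21, 21), 0)
    sketch = cv2.divide(gray, 255 - blurred, scale=256)
    cv2.imwrite(str(output_dir / path.name), sketch)
```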

          -

If you want to try AKVIS Sketch 14.0.2545 Portable for free, you can download it from the official AKVIS website. You can use it for 10 days without any limitations and see how it transforms your photos into sketches.

          - -

          To use AKVIS Sketch 14.0.2545 Portable, you just need to follow these simple steps:

          -
            -
1. Copy the folder with the program files to your removable device.
2. Run the executable file AKVIS Sketch.exe from the folder.
3. Open an image that you want to convert into a sketch.
4. Select the Classic style and adjust the sketch parameters to your liking.
5. Click on the Run button to apply the effect.
6. Save your sketch or share it online.
          -

You can also use AKVIS Sketch 14.0.2545 Portable as a plugin in your favorite image editor, such as AliveColors, Photoshop, PaintShop Pro, and others. To do that, you need to copy the plugin file Sketch_64.8bf from the folder Plugins to the Plugins folder of your image editor. Then you can access the plugin from the Filters menu of your image editor and use it as usual.
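If you ever want to script that copy step, for example to install the plugin into several editors at once, a plain file copy is all that is required. Both paths below are hypothetical examples; substitute the real location of your portable folder and your editor's plugin directory.

```python
import shutil
from pathlib import Path

# Plugin shipped inside the portable package (example path).
plugin = Path("AKVIS Sketch Portable/Plugins/Sketch_64.8bf")

# Your image editor's plugin folder (example path, adjust for your setup).
destination = Path("C:/Program Files/Adobe/Adobe Photoshop/Plug-ins")

shutil.copy2(plugin, destination / plugin.name)
```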

          -

          AKVIS Sketch 14.0.2545 Portable is a versatile and user-friendly tool that can help you create amazing sketches from your photos in minutes. Whether you want to make a portrait, a landscape, a still life, or a collage, you can achieve impressive results with AKVIS Sketch 14.0.2545 Portable.

          -

          - -

One of the main benefits of AKVIS Sketch 14.0.2545 Portable is that it does not require installation and can run from any removable device, such as a USB flash drive or an external hard disk. This means that you can use it on any computer without leaving any traces of your activity. You can also take it with you wherever you go and create sketches on the fly.

          -

          Another benefit of AKVIS Sketch 14.0.2545 Portable is that it is very easy to use and has a user-friendly interface. You can convert any image into a sketch in just a few clicks, without any complicated settings or technical skills. You can also preview the result before applying the effect and undo any changes if you are not satisfied.

          -

A third benefit of AKVIS Sketch 14.0.2545 Portable is the wide range of options it offers for customizing your sketches. You can choose between the Classic and Artistic styles, fine-tune parameters such as the stroke direction, edge intensity, coloration, noise level, and background type, and apply artistic effects such as charcoal, pastel, hatching, or stippling.

          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gold Rush The Game Serial Keygolkes.md b/spaces/raedeXanto/academic-chatgpt-beta/Gold Rush The Game Serial Keygolkes.md deleted file mode 100644 index eb101f1f9e6ce46405e478c1ae5da51259f15bc9..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gold Rush The Game Serial Keygolkes.md +++ /dev/null @@ -1,91 +0,0 @@ - -

          Gold Rush: The Game Serial Keygolkes - A Review

          -

          If you are a fan of gold mining and simulation games, you might have heard of Gold Rush: The Game, a realistic and immersive game that lets you become a gold miner in Alaska. But what are serial keygolkes, and why do some people use them to play this game? In this article, we will explain what these terms mean, why people use them, and what are the risks and drawbacks of doing so. We will also suggest some alternatives to using serial keygolkes for Gold Rush: The Game, so you can enjoy this game without any problems.

          -

          Gold Rush: The Game Serial Keygolkes


          Download Filehttps://tinourl.com/2uKZ0L



          -

          What is Gold Rush: The Game?

          -

          Gold Rush: The Game is a gold mining simulator that was released in 2017 by Code Horizon and PlayWay S.A. It is based on the smash-hit TV series from Discovery Channel, which follows the lives of various gold miners in Alaska. In this game, you can start with nothing but a few spare bucks and work your way up to becoming a millionaire by finding and extracting gold from different claims. You can use a variety of specialist machines such as excavators, drills, loaders, bulldozers, and wash plants to dig deep, explore the world, and process your gold. You can also upgrade your equipment, buy new vehicles, hire workers, and compete with other players online.

          -

          One of the features that makes this game stand out is its dynamic weather and seasons. The game has a realistic day-night cycle and changes in temperature, precipitation, and wind. These factors affect not only the appearance of the environment but also the gameplay. For example, you might have to deal with frozen water pipes, muddy roads, or snowstorms that can slow down your operations. You also have to plan ahead and prepare for winter, when the ground freezes and you have to stop mining until spring. This adds a layer of challenge and realism to the game that makes it more engaging and fun.

          -

          What are serial keygolkes?

          -

"Serial keygolkes" is a term that some people use to refer to cracked or pirated game keys. These are keys that are obtained illegally or without paying for them, either by hacking, stealing, or generating them with a software tool. Some people use these keys to activate or unlock games that they have downloaded from torrent sites or other sources. This way, they can play games without paying for them or without waiting for their official release date.

          -

          Another meaning of serial keygolkes is game keys that unlock extra features or DLCs. DLCs are downloadable content that add new content or features to a game, such as new maps, vehicles, missions, modes, etc. Some games require you to purchase these DLCs separately or as part of a bundle or season pass. However, some people use serial keygolkes to access these DLCs without paying for them or without owning the base game.

          -

          A third meaning of serial keygolkes is game keys that are generated by a software tool. These are keys that are not obtained from the official source but rather created by a program that mimics the algorithm or format of the original keys. Some people use these tools to generate random keys that might work for certain games or platforms. However, these tools are not reliable and often produce invalid or duplicate keys that do not work.

          -


          -

          Why do people use serial keygolkes for Gold Rush: The Game?

          -

              There are several reasons why some people use serial keygolkes for Gold Rush: The Game. One is to save money by not paying for the game: they may not be able to afford it, may feel it is too expensive or not worth the price, or may simply believe they are entitled to play any game for free.
    

          -

              Another reason is to access the game before its official release date. Some people are impatient or curious and want to play as soon as possible, rather than wait for the launch or for their pre-order to arrive. They may also want to avoid spoilers or join the hype before anyone else.
    

          -

              A third reason is to enjoy the game without any restrictions or limitations. Some people might want to play without having to deal with DRM (digital rights management) systems, which are meant to prevent piracy but can also cause inconvenience for legitimate users. They might also want to play offline or on multiple devices without logging in or verifying an account, or to access all the features and DLCs of the game without paying extra or owning the base game.
    

          -

          What are the risks and drawbacks of using serial keygolkes for Gold Rush: The Game?

          -

          Using serial keygolkes for Gold Rush: The Game might seem tempting or convenient for some people, but it also comes with many risks and drawbacks. One of them is legal issues and potential lawsuits from the developers or publishers. Using serial keygolkes is considered piracy, which is illegal and unethical. It violates the intellectual property rights of the creators and distributors of the game. It also harms their revenue and reputation. If you are caught using serial keygolkes, you might face legal consequences such as fines, lawsuits, or even jail time.

          -

              Another risk is malware and viruses that can harm your computer or steal your data. Downloading serial keygolkes from unknown sources can expose your computer to malicious software that can infect your system, damage your files, slow down your performance, spy on your activity, steal your personal information, or even take control of your device. This malware can also spread to other devices on your network or to your online contacts. You might end up losing your data, money, identity, privacy, or security.
    
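              If you do download an installer or patch, a safer habit than trusting random key bundles is to verify the file against a checksum published by the developer. The snippet below is a generic sketch using Python's standard hashlib module; the file name and the expected hash are placeholders, not real values from Code Horizon.

    ```python
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 hash of a file without loading it all into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder values for illustration only.
    downloaded_file = "GoldRushSetup.exe"
    published_sha256 = "paste-the-checksum-from-the-official-site-here"

    if sha256_of(downloaded_file) == published_sha256.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch: do not run this file.")
    ```
    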

          -

              A third drawback is poor performance and compatibility issues with the game. Using serial keygolkes can affect how well the game runs on your computer: you might experience bugs, glitches, crashes, errors, lag, freezing, stuttering, low-quality graphics, sound problems, missing features, and so on. You might also have trouble updating or patching the game to fix these issues or improve its performance. Finally, you might face compatibility problems with different platforms (such as Steam), operating systems (such as Windows), hardware (such as graphics cards), or software components (such as DirectX and device drivers).
    

          -

          What are the alternatives to using serial keygolkes for Gold Rush: The Game?

              -
    

          If you want to play Gold Rush: The Game without using serial keygolkes, there are several alternatives that you can choose from. One of them is buying the game from a legitimate source such as Steam or the official website. This is the best way to support the developers and publishers who worked hard to create this game. It also ensures that you get a high-quality product that works properly and safely on your computer. You also get access to all the features and DLCs of the game as well as updates and patches that improve its performance and fix any issues. You can also enjoy the online multiplayer mode and interact with other players who share your passion for gold mining.

          -

          Another alternative is waiting for discounts or sales on the game price. If you think that the game is too expensive or not worth its full price, you can wait for some occasions when the game is offered at a lower price or even for free. For example, you can check Steam for seasonal sales, daily deals, weekly offers, or special promotions. You can also check other platforms or websites that sell games at discounted prices or offer coupons or vouchers. You can also join giveaways or contests that might reward you with a free copy of the game.
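              One practical way to watch for a sale is to poll Steam's storefront price data. The sketch below uses the unofficial appdetails endpoint with the game's app ID (451340, taken from its Steam store URL); that endpoint and its JSON field names are undocumented and may change, so treat this as an assumption rather than a supported API.

    ```python
    import requests

    APP_ID = 451340  # Gold Rush: The Game (from its Steam store URL)
    URL = "https://store.steampowered.com/api/appdetails"

    def print_current_price(app_id: int, country: str = "us") -> None:
        # Ask only for the price_overview block to keep the response small.
        params = {"appids": app_id, "cc": country, "filters": "price_overview"}
        data = requests.get(URL, params=params, timeout=10).json()
        entry = data.get(str(app_id), {})
        if not entry.get("success"):
            print("No price data returned for this app/region.")
            return
        payload = entry.get("data")
        price = payload.get("price_overview") if isinstance(payload, dict) else None
        if price is None:
            print("No price listed for this region (free, unlisted, or field changed).")
            return
        print(f"Current price: {price['final_formatted']} "
              f"(discount: {price['discount_percent']}%)")

    if __name__ == "__main__":
        print_current_price(APP_ID)
    ```

              Run on a schedule (for example via cron), a script like this could simply notify you whenever the discount percentage is non-zero.
    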

          -

          A third alternative is supporting the developers and publishers by purchasing the DLCs or in-game items. If you already own the base game and want to enhance your gaming experience, you can buy some of the DLCs or in-game items that add new content or features to the game. For example, you can buy the Collector's Edition Upgrade that includes a digital artbook, a soundtrack, a wallpaper pack, and an exclusive leader skin. You can also buy the Frankenstein Machinery DLC that adds two new machines: a mobile wash plant and a bucket wheel excavator. You can also buy some in-game items such as gold bars, magnets, hog pans, etc. By doing so, you not only get more fun and value from the game but also show your appreciation and support for the creators.

          -

          Conclusion

          -

          In conclusion, Gold Rush: The Game is a gold mining simulator that lets you become a gold miner in Alaska. It is based on the TV series from Discovery Channel and features realistic graphics, physics, and gameplay. It also has dynamic weather and seasons that affect your mining operations. However, some people use serial keygolkes to play this game without paying for it or to access extra features or DLCs. Serial keygolkes are cracked or pirated game keys that are obtained illegally or generated by a software tool. People use them to save money, access the game early, or enjoy the game without limitations. However, using serial keygolkes has many risks and drawbacks such as legal issues, malware, poor performance, and compatibility issues. Therefore, it is better to avoid using serial keygolkes and choose some alternatives such as buying the game from a legitimate source, waiting for discounts or sales, or supporting the developers by purchasing the DLCs or in-game items.

          -

          FAQs

          -
              • Q: How much does Gold Rush: The Game cost?
              • A: The game costs $19.99 on Steam and other platforms. However, you can also buy it with some DLCs or bundles at a discounted price.
              • Q: How can I play Gold Rush: The Game online?
              • A: You can play online with other players by joining or hosting a multiplayer session. You can also chat with other players and compete with them on leaderboards.
              • Q: How can I update Gold Rush: The Game?
              • A: You can update the game automatically through Steam or manually by downloading and installing patches from the official website.
              • Q: How can I contact Gold Rush: The Game support?
              • A: You can contact support by sending an email to support@codehorizon.com or by visiting their website at https://codehorizon.com/.
              • Q: How can I learn more about Gold Rush: The Game?
              • A: You can learn more about the game by visiting its official website at https://goldrush-thegame.com/, its Steam page at https://store.steampowered.com/app/451340/Gold_Rush_The_Game/, its Facebook page at https://www.facebook.com/GoldRushTheGame/, its Twitter page at https://twitter.com/GoldRushTheGame/, its YouTube channel at https://www.youtube.com/channel/UCG9ZtB7cL0rR5o4Ieh1Ym7w/, or its Discord server at https://discord.gg/goldrushthegame/.
    

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/modules.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/modules.py deleted file mode 100644 index a192251aaccb036780d77d6c8b538b652a5e24e2..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/modules.py +++ /dev/null @@ -1,276 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -import commons - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-4): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - n_dims = len(x.shape) - mean = torch.mean(x, 1, keepdim=True) - variance = torch.mean((x - mean) ** 2, 1, keepdim=True) - - x = (x - mean) * torch.rsqrt(variance + self.eps) - - shape = [1, -1] + [1] * (n_dims - 2) - x = x * self.gamma.view(*shape) + self.beta.view(*shape) - return x - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - assert hidden_channels % 2 == 0 - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - 
self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask=None, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - x_in = self.drop(x_in) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - x = (x + res_skip_acts[:, : self.hidden_channels, :]) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ActNorm(nn.Module): - def __init__(self, channels, ddi=False, **kwargs): - super().__init__() - self.channels = channels - self.initialized = not ddi - - self.logs = nn.Parameter(torch.zeros(1, channels, 1)) - self.bias = nn.Parameter(torch.zeros(1, channels, 1)) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - if x_mask is None: - x_mask = torch.ones(x.size(0), 1, x.size(2)).to( - device=x.device, dtype=x.dtype - ) - x_len = torch.sum(x_mask, [1, 2]) - if not self.initialized: - self.initialize(x, x_mask) - self.initialized = True - - if reverse: - z = (x - self.bias) * torch.exp(-self.logs) * x_mask - logdet = None - else: - z = (self.bias + torch.exp(self.logs) * x) * x_mask - logdet = torch.sum(self.logs) * x_len # [b] - - return z, logdet - - def store_inverse(self): - pass - - def set_ddi(self, ddi): - self.initialized = not ddi - - def initialize(self, x, x_mask): - with torch.no_grad(): - denom = torch.sum(x_mask, [0, 2]) - m = torch.sum(x * x_mask, [0, 2]) / denom - m_sq = torch.sum(x * x * x_mask, [0, 2]) / denom - v = m_sq - (m ** 2) - logs = 0.5 * torch.log(torch.clamp_min(v, 1e-6)) - - bias_init = ( - (-m * torch.exp(-logs)).view(*self.bias.shape).to(dtype=self.bias.dtype) - ) - logs_init = (-logs).view(*self.logs.shape).to(dtype=self.logs.dtype) - - self.bias.data.copy_(bias_init) - self.logs.data.copy_(logs_init) - - -class InvConvNear(nn.Module): - def __init__(self, channels, n_split=4, no_jacobian=False, **kwargs): - super().__init__() - assert n_split % 2 == 0 - self.channels = channels - self.n_split = n_split - self.no_jacobian = no_jacobian - - w_init = torch.qr(torch.FloatTensor(self.n_split, self.n_split).normal_())[0] - if torch.det(w_init) < 0: - w_init[:, 0] = -1 * w_init[:, 0] - self.weight = nn.Parameter(w_init) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - b, c, t = x.size() - assert c % self.n_split == 0 - if x_mask is None: - x_mask = 1 - x_len = torch.ones((b,), dtype=x.dtype, device=x.device) * t - else: - x_len = torch.sum(x_mask, [1, 2]) - - x = x.view(b, 2, c // self.n_split, self.n_split 
// 2, t) - x = ( - x.permute(0, 1, 3, 2, 4) - .contiguous() - .view(b, self.n_split, c // self.n_split, t) - ) - - if reverse: - if hasattr(self, "weight_inv"): - weight = self.weight_inv - else: - weight = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype) - logdet = None - else: - weight = self.weight - if self.no_jacobian: - logdet = 0 - else: - logdet = torch.logdet(self.weight) * (c / self.n_split) * x_len # [b] - - weight = weight.view(self.n_split, self.n_split, 1, 1) - z = F.conv2d(x, weight) - - z = z.view(b, 2, self.n_split // 2, c // self.n_split, t) - z = z.permute(0, 1, 3, 2, 4).contiguous().view(b, c, t) * x_mask - return z, logdet - - def store_inverse(self): - self.weight_inv = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype) diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/transliterate.py b/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/transliterate.py deleted file mode 100644 index 575430562683434cd44fd8d2e77d26dab9ced73b..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/tts_infer/transliterate.py +++ /dev/null @@ -1,919 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -import pandas as pd -import random -import sys -import os -import json -import enum -import traceback -import re - -F_DIR = os.path.dirname(os.environ.get('translit_model_base_path', os.path.realpath(__file__))) - - -class XlitError(enum.Enum): - lang_err = "Unsupported langauge ID requested ;( Please check available languages." - string_err = "String passed is incompatable ;(" - internal_err = "Internal crash ;(" - unknown_err = "Unknown Failure" - loading_err = "Loading failed ;( Check if metadata/paths are correctly configured." - - -##=================== Network ================================================== - - -class Encoder(nn.Module): - def __init__( - self, - input_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - bidirectional=False, - dropout=0, - device="cpu", - ): - super(Encoder, self).__init__() - - self.input_dim = input_dim # src_vocab_sz - self.enc_embed_dim = embed_dim - self.enc_hidden_dim = hidden_dim - self.enc_rnn_type = rnn_type - self.enc_layers = layers - self.enc_directions = 2 if bidirectional else 1 - self.device = device - - self.embedding = nn.Embedding(self.input_dim, self.enc_embed_dim) - - if self.enc_rnn_type == "gru": - self.enc_rnn = nn.GRU( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - elif self.enc_rnn_type == "lstm": - self.enc_rnn = nn.LSTM( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - else: - raise Exception("XlitError: unknown RNN type mentioned") - - def forward(self, x, x_sz, hidden=None): - """ - x_sz: (batch_size, 1) - Unpadded sequence lengths used for pack_pad - """ - batch_sz = x.shape[0] - # x: batch_size, max_length, enc_embed_dim - x = self.embedding(x) - - ## pack the padded data - # x: max_length, batch_size, enc_embed_dim -> for pack_pad - x = x.permute(1, 0, 2) - x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad - - # output: packed_size, batch_size, enc_embed_dim - # hidden: n_layer**num_directions, batch_size, hidden_dim | if LSTM (h_n, c_n) - output, hidden = self.enc_rnn( - x - ) # gru returns hidden state of all timesteps as well as hidden state at last timestep - - ## pad the sequence to the max length in the batch - # output: max_length, 
batch_size, enc_emb_dim*directions) - output, _ = nn.utils.rnn.pad_packed_sequence(output) - - # output: batch_size, max_length, hidden_dim - output = output.permute(1, 0, 2) - - return output, hidden - - def get_word_embedding(self, x): - """ """ - x_sz = torch.tensor([len(x)]) - x_ = torch.tensor(x).unsqueeze(0).to(dtype=torch.long) - # x: 1, max_length, enc_embed_dim - x = self.embedding(x_) - - ## pack the padded data - # x: max_length, 1, enc_embed_dim -> for pack_pad - x = x.permute(1, 0, 2) - x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad - - # output: packed_size, 1, enc_embed_dim - # hidden: n_layer**num_directions, 1, hidden_dim | if LSTM (h_n, c_n) - output, hidden = self.enc_rnn( - x - ) # gru returns hidden state of all timesteps as well as hidden state at last timestep - - out_embed = hidden[0].squeeze() - - return out_embed - - -class Decoder(nn.Module): - def __init__( - self, - output_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - use_attention=True, - enc_outstate_dim=None, # enc_directions * enc_hidden_dim - dropout=0, - device="cpu", - ): - super(Decoder, self).__init__() - - self.output_dim = output_dim # tgt_vocab_sz - self.dec_hidden_dim = hidden_dim - self.dec_embed_dim = embed_dim - self.dec_rnn_type = rnn_type - self.dec_layers = layers - self.use_attention = use_attention - self.device = device - if self.use_attention: - self.enc_outstate_dim = enc_outstate_dim if enc_outstate_dim else hidden_dim - else: - self.enc_outstate_dim = 0 - - self.embedding = nn.Embedding(self.output_dim, self.dec_embed_dim) - - if self.dec_rnn_type == "gru": - self.dec_rnn = nn.GRU( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - elif self.dec_rnn_type == "lstm": - self.dec_rnn = nn.LSTM( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - else: - raise Exception("XlitError: unknown RNN type mentioned") - - self.fc = nn.Sequential( - nn.Linear(self.dec_hidden_dim, self.dec_embed_dim), - nn.LeakyReLU(), - # nn.Linear(self.dec_embed_dim, self.dec_embed_dim), nn.LeakyReLU(), # removing to reduce size - nn.Linear(self.dec_embed_dim, self.output_dim), - ) - - ##----- Attention ---------- - if self.use_attention: - self.W1 = nn.Linear(self.enc_outstate_dim, self.dec_hidden_dim) - self.W2 = nn.Linear(self.dec_hidden_dim, self.dec_hidden_dim) - self.V = nn.Linear(self.dec_hidden_dim, 1) - - def attention(self, x, hidden, enc_output): - """ - x: (batch_size, 1, dec_embed_dim) -> after Embedding - enc_output: batch_size, max_length, enc_hidden_dim *num_directions - hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - """ - - ## perform addition to calculate the score - - # hidden_with_time_axis: batch_size, 1, hidden_dim - ## hidden_with_time_axis = hidden.permute(1, 0, 2) ## replaced with below 2lines - hidden_with_time_axis = ( - torch.sum(hidden, axis=0) - if self.dec_rnn_type != "lstm" - else torch.sum(hidden[0], axis=0) - ) # h_n - - hidden_with_time_axis = hidden_with_time_axis.unsqueeze(1) - - # score: batch_size, max_length, hidden_dim - score = torch.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)) - - # attention_weights: batch_size, max_length, 1 - # we get 1 at the last axis because we are applying score to self.V - 
attention_weights = torch.softmax(self.V(score), dim=1) - - # context_vector shape after sum == (batch_size, hidden_dim) - context_vector = attention_weights * enc_output - context_vector = torch.sum(context_vector, dim=1) - # context_vector: batch_size, 1, hidden_dim - context_vector = context_vector.unsqueeze(1) - - # attend_out (batch_size, 1, dec_embed_dim + hidden_size) - attend_out = torch.cat((context_vector, x), -1) - - return attend_out, attention_weights - - def forward(self, x, hidden, enc_output): - """ - x: (batch_size, 1) - enc_output: batch_size, max_length, dec_embed_dim - hidden: n_layer, batch_size, hidden_size | lstm: (h_n, c_n) - """ - if (hidden is None) and (self.use_attention is False): - raise Exception( - "XlitError: No use of a decoder with No attention and No Hidden" - ) - - batch_sz = x.shape[0] - - if hidden is None: - # hidden: n_layers, batch_size, hidden_dim - hid_for_att = torch.zeros( - (self.dec_layers, batch_sz, self.dec_hidden_dim) - ).to(self.device) - elif self.dec_rnn_type == "lstm": - hid_for_att = hidden[1] # c_n - - # x (batch_size, 1, dec_embed_dim) -> after embedding - x = self.embedding(x) - - if self.use_attention: - # x (batch_size, 1, dec_embed_dim + hidden_size) -> after attention - # aw: (batch_size, max_length, 1) - x, aw = self.attention(x, hidden, enc_output) - else: - x, aw = x, 0 - - # passing the concatenated vector to the GRU - # output: (batch_size, n_layers, hidden_size) - # hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - output, hidden = ( - self.dec_rnn(x, hidden) if hidden is not None else self.dec_rnn(x) - ) - - # output :shp: (batch_size * 1, hidden_size) - output = output.view(-1, output.size(2)) - - # output :shp: (batch_size * 1, output_dim) - output = self.fc(output) - - return output, hidden, aw - - -class Seq2Seq(nn.Module): - """ - Class dependency: Encoder, Decoder - """ - - def __init__( - self, encoder, decoder, pass_enc2dec_hid=False, dropout=0, device="cpu" - ): - super(Seq2Seq, self).__init__() - - self.encoder = encoder - self.decoder = decoder - self.device = device - self.pass_enc2dec_hid = pass_enc2dec_hid - _force_en2dec_hid_conv = False - - if self.pass_enc2dec_hid: - assert ( - decoder.dec_hidden_dim == encoder.enc_hidden_dim - ), "Hidden Dimension of encoder and decoder must be same, or unset `pass_enc2dec_hid`" - if decoder.use_attention: - assert ( - decoder.enc_outstate_dim - == encoder.enc_directions * encoder.enc_hidden_dim - ), "Set `enc_out_dim` correctly in decoder" - assert ( - self.pass_enc2dec_hid or decoder.use_attention - ), "No use of a decoder with No attention and No Hidden from Encoder" - - self.use_conv_4_enc2dec_hid = False - if ( - self.pass_enc2dec_hid - and (encoder.enc_directions * encoder.enc_layers != decoder.dec_layers) - ) or _force_en2dec_hid_conv: - if encoder.enc_rnn_type == "lstm" or encoder.enc_rnn_type == "lstm": - raise Exception( - "XlitError: conv for enc2dec_hid not implemented; Change the layer numbers appropriately" - ) - - self.use_conv_4_enc2dec_hid = True - self.enc_hid_1ax = encoder.enc_directions * encoder.enc_layers - self.dec_hid_1ax = decoder.dec_layers - self.e2d_hidden_conv = nn.Conv1d(self.enc_hid_1ax, self.dec_hid_1ax, 1) - - def enc2dec_hidden(self, enc_hidden): - """ - enc_hidden: n_layer, batch_size, hidden_dim*num_directions - TODO: Implement the logic for LSTm bsed model - """ - # hidden: batch_size, enc_layer*num_directions, enc_hidden_dim - hidden = enc_hidden.permute(1, 0, 2).contiguous() - # hidden: batch_size, dec_layers, 
dec_hidden_dim -> [N,C,Tstep] - hidden = self.e2d_hidden_conv(hidden) - - # hidden: dec_layers, batch_size , dec_hidden_dim - hidden_for_dec = hidden.permute(1, 0, 2).contiguous() - - return hidden_for_dec - - def active_beam_inference(self, src, beam_width=3, max_tgt_sz=50): - """Search based decoding - src: (sequence_len) - """ - - def _avg_score(p_tup): - """Used for Sorting - TODO: Dividing by length of sequence power alpha as hyperparam - """ - return p_tup[0] - - import sys - - batch_size = 1 - start_tok = src[0] - end_tok = src[-1] - src_sz = torch.tensor([len(src)]) - src_ = src.unsqueeze(0) - - # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction) - # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim) - enc_output, enc_hidden = self.encoder(src_, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - if self.use_conv_4_enc2dec_hid: - init_dec_hidden = self.enc2dec_hidden(enc_hidden) - else: - init_dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - init_dec_hidden = None - - # top_pred[][0] = Σ-log_softmax - # top_pred[][1] = sequence torch.tensor shape: (1) - # top_pred[][2] = dec_hidden - top_pred_list = [(0, start_tok.unsqueeze(0), init_dec_hidden)] - - for t in range(max_tgt_sz): - cur_pred_list = [] - - for p_tup in top_pred_list: - if p_tup[1][-1] == end_tok: - cur_pred_list.append(p_tup) - continue - - # dec_hidden: dec_layers, 1, hidden_dim - # dec_output: 1, output_dim - dec_output, dec_hidden, _ = self.decoder( - x=p_tup[1][-1].view(1, 1), # dec_input: (1,1) - hidden=p_tup[2], - enc_output=enc_output, - ) - - ## π{prob} = Σ{log(prob)} -> to prevent diminishing - # dec_output: (1, output_dim) - dec_output = nn.functional.log_softmax(dec_output, dim=1) - # pred_topk.values & pred_topk.indices: (1, beam_width) - pred_topk = torch.topk(dec_output, k=beam_width, dim=1) - - for i in range(beam_width): - sig_logsmx_ = p_tup[0] + pred_topk.values[0][i] - # seq_tensor_ : (seq_len) - seq_tensor_ = torch.cat((p_tup[1], pred_topk.indices[0][i].view(1))) - - cur_pred_list.append((sig_logsmx_, seq_tensor_, dec_hidden)) - - cur_pred_list.sort(key=_avg_score, reverse=True) # Maximized order - top_pred_list = cur_pred_list[:beam_width] - - # check if end_tok of all topk - end_flags_ = [1 if t[1][-1] == end_tok else 0 for t in top_pred_list] - if beam_width == sum(end_flags_): - break - - pred_tnsr_list = [t[1] for t in top_pred_list] - - return pred_tnsr_list - - -##===================== Glyph handlers ======================================= - - -class GlyphStrawboss: - def __init__(self, glyphs="en"): - """list of letters in a language in unicode - lang: ISO Language code - glyphs: json file with script information - """ - if glyphs == "en": - # Smallcase alone - self.glyphs = [chr(alpha) for alpha in range(97, 122 + 1)] - else: - self.dossier = json.load(open(glyphs, encoding="utf-8")) - self.glyphs = self.dossier["glyphs"] - self.numsym_map = self.dossier["numsym_map"] - - self.char2idx = {} - self.idx2char = {} - self._create_index() - - def _create_index(self): - - self.char2idx["_"] = 0 # pad - self.char2idx["$"] = 1 # start - self.char2idx["#"] = 2 # end - self.char2idx["*"] = 3 # Mask - self.char2idx["'"] = 4 # apostrophe U+0027 - self.char2idx["%"] = 5 # unused - self.char2idx["!"] = 6 # unused - - # letter to index mapping - for idx, char in enumerate(self.glyphs): - self.char2idx[char] = idx + 7 # +7 token initially - - # index to letter mapping - for char, idx in 
self.char2idx.items(): - self.idx2char[idx] = char - - def size(self): - return len(self.char2idx) - - def word2xlitvec(self, word): - """Converts given string of gyphs(word) to vector(numpy) - Also adds tokens for start and end - """ - try: - vec = [self.char2idx["$"]] # start token - for i in list(word): - vec.append(self.char2idx[i]) - vec.append(self.char2idx["#"]) # end token - - vec = np.asarray(vec, dtype=np.int64) - return vec - - except Exception as error: - print("XlitError: In word:", word, "Error Char not in Token:", error) - sys.exit() - - def xlitvec2word(self, vector): - """Converts vector(numpy) to string of glyphs(word)""" - char_list = [] - for i in vector: - char_list.append(self.idx2char[i]) - - word = "".join(char_list).replace("$", "").replace("#", "") # remove tokens - word = word.replace("_", "").replace("*", "") # remove tokens - return word - - -class VocabSanitizer: - def __init__(self, data_file): - """ - data_file: path to file conatining vocabulary list - """ - extension = os.path.splitext(data_file)[-1] - if extension == ".json": - self.vocab_set = set(json.load(open(data_file, encoding="utf-8"))) - elif extension == ".csv": - self.vocab_df = pd.read_csv(data_file).set_index("WORD") - self.vocab_set = set(self.vocab_df.index) - else: - print("XlitError: Only Json/CSV file extension supported") - - def reposition(self, word_list): - """Reorder Words in list""" - new_list = [] - temp_ = word_list.copy() - for v in word_list: - if v in self.vocab_set: - new_list.append(v) - temp_.remove(v) - new_list.extend(temp_) - - return new_list - - -##=============== INSTANTIATION ================================================ - - -class XlitPiston: - """ - For handling prediction & post-processing of transliteration for a single language - Class dependency: Seq2Seq, GlyphStrawboss, VocabSanitizer - Global Variables: F_DIR - """ - - def __init__( - self, - weight_path, - vocab_file, - tglyph_cfg_file, - iglyph_cfg_file="en", - device="cpu", - ): - - self.device = device - self.in_glyph_obj = GlyphStrawboss(iglyph_cfg_file) - self.tgt_glyph_obj = GlyphStrawboss(glyphs=tglyph_cfg_file) - self.voc_sanity = VocabSanitizer(vocab_file) - - self._numsym_set = set( - json.load(open(tglyph_cfg_file, encoding="utf-8"))["numsym_map"].keys() - ) - self._inchar_set = set("abcdefghijklmnopqrstuvwxyz") - self._natscr_set = set().union( - self.tgt_glyph_obj.glyphs, sum(self.tgt_glyph_obj.numsym_map.values(), []) - ) - - ## Model Config Static TODO: add defining in json support - input_dim = self.in_glyph_obj.size() - output_dim = self.tgt_glyph_obj.size() - enc_emb_dim = 300 - dec_emb_dim = 300 - enc_hidden_dim = 512 - dec_hidden_dim = 512 - rnn_type = "lstm" - enc2dec_hid = True - attention = True - enc_layers = 1 - dec_layers = 2 - m_dropout = 0 - enc_bidirect = True - enc_outstate_dim = enc_hidden_dim * (2 if enc_bidirect else 1) - - enc = Encoder( - input_dim=input_dim, - embed_dim=enc_emb_dim, - hidden_dim=enc_hidden_dim, - rnn_type=rnn_type, - layers=enc_layers, - dropout=m_dropout, - device=self.device, - bidirectional=enc_bidirect, - ) - dec = Decoder( - output_dim=output_dim, - embed_dim=dec_emb_dim, - hidden_dim=dec_hidden_dim, - rnn_type=rnn_type, - layers=dec_layers, - dropout=m_dropout, - use_attention=attention, - enc_outstate_dim=enc_outstate_dim, - device=self.device, - ) - self.model = Seq2Seq(enc, dec, pass_enc2dec_hid=enc2dec_hid, device=self.device) - self.model = self.model.to(self.device) - weights = torch.load(weight_path, map_location=torch.device(self.device)) - 
- self.model.load_state_dict(weights) - self.model.eval() - - def character_model(self, word, beam_width=1): - in_vec = torch.from_numpy(self.in_glyph_obj.word2xlitvec(word)).to(self.device) - ## change to active or passive beam - p_out_list = self.model.active_beam_inference(in_vec, beam_width=beam_width) - p_result = [ - self.tgt_glyph_obj.xlitvec2word(out.cpu().numpy()) for out in p_out_list - ] - - result = self.voc_sanity.reposition(p_result) - - # List type - return result - - def numsym_model(self, seg): - """tgt_glyph_obj.numsym_map[x] returns a list object""" - if len(seg) == 1: - return [seg] + self.tgt_glyph_obj.numsym_map[seg] - - a = [self.tgt_glyph_obj.numsym_map[n][0] for n in seg] - return [seg] + ["".join(a)] - - def _word_segementer(self, sequence): - - sequence = sequence.lower() - accepted = set().union(self._numsym_set, self._inchar_set, self._natscr_set) - # sequence = ''.join([i for i in sequence if i in accepted]) - - segment = [] - idx = 0 - seq_ = list(sequence) - while len(seq_): - # for Number-Symbol - temp = "" - while len(seq_) and seq_[0] in self._numsym_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Target Chars - temp = "" - while len(seq_) and seq_[0] in self._natscr_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Input-Roman Chars - temp = "" - while len(seq_) and seq_[0] in self._inchar_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - temp = "" - while len(seq_) and seq_[0] not in accepted: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - return segment - - def inferencer(self, sequence, beam_width=10): - - seg = self._word_segementer(sequence[:120]) - lit_seg = [] - - p = 0 - while p < len(seg): - if seg[p][0] in self._natscr_set: - lit_seg.append([seg[p]]) - p += 1 - - elif seg[p][0] in self._inchar_set: - lit_seg.append(self.character_model(seg[p], beam_width=beam_width)) - p += 1 - - elif seg[p][0] in self._numsym_set: # num & punc - lit_seg.append(self.numsym_model(seg[p])) - p += 1 - else: - lit_seg.append([seg[p]]) - p += 1 - - ## IF segment less/equal to 2 then return combinotorial, - ## ELSE only return top1 of each result concatenated - if len(lit_seg) == 1: - final_result = lit_seg[0] - - elif len(lit_seg) == 2: - final_result = [""] - for seg in lit_seg: - new_result = [] - for s in seg: - for f in final_result: - new_result.append(f + s) - final_result = new_result - - else: - new_result = [] - for seg in lit_seg: - new_result.append(seg[0]) - final_result = ["".join(new_result)] - - return final_result - - -from collections.abc import Iterable -from pydload import dload -import zipfile - -MODEL_DOWNLOAD_URL_PREFIX = "https://github.com/AI4Bharat/IndianNLP-Transliteration/releases/download/xlit_v0.5.0/" - - -def is_folder_writable(folder): - try: - os.makedirs(folder, exist_ok=True) - tmp_file = os.path.join(folder, ".write_test") - with open(tmp_file, "w") as f: - f.write("Permission Check") - os.remove(tmp_file) - return True - except: - return False - - -def is_directory_writable(path): - if os.name == "nt": - return is_folder_writable(path) - return os.access(path, os.W_OK | os.X_OK) - - -class XlitEngine: - """ - For Managing the top level tasks and applications of transliteration - Global Variables: F_DIR - """ - - def __init__( - self, lang2use="all", config_path="translit_models/default_lineup.json" - ): - - lineup = json.load(open(os.path.join(F_DIR, config_path), encoding="utf-8")) - 
self.lang_config = {} - if isinstance(lang2use, str): - if lang2use == "all": - self.lang_config = lineup - elif lang2use in lineup: - self.lang_config[lang2use] = lineup[lang2use] - else: - raise Exception( - "XlitError: The entered Langauge code not found. Available are {}".format( - lineup.keys() - ) - ) - - elif isinstance(lang2use, Iterable): - for l in lang2use: - try: - self.lang_config[l] = lineup[l] - except: - print( - "XlitError: Language code {} not found, Skipping...".format(l) - ) - else: - raise Exception( - "XlitError: lang2use must be a list of language codes (or) string of single language code" - ) - - if is_directory_writable(F_DIR): - models_path = os.path.join(F_DIR, "translit_models") - else: - user_home = os.path.expanduser("~") - models_path = os.path.join(user_home, ".AI4Bharat_Xlit_Models") - os.makedirs(models_path, exist_ok=True) - self.download_models(models_path) - - self.langs = {} - self.lang_model = {} - for la in self.lang_config: - try: - print("Loading {}...".format(la)) - self.lang_model[la] = XlitPiston( - weight_path=os.path.join( - models_path, self.lang_config[la]["weight"] - ), - vocab_file=os.path.join(models_path, self.lang_config[la]["vocab"]), - tglyph_cfg_file=os.path.join( - models_path, self.lang_config[la]["script"] - ), - iglyph_cfg_file="en", - ) - self.langs[la] = self.lang_config[la]["name"] - except Exception as error: - print("XlitError: Failure in loading {} \n".format(la), error) - print(XlitError.loading_err.value) - - def download_models(self, models_path): - """ - Download models from GitHub Releases if not exists - """ - for l in self.lang_config: - lang_name = self.lang_config[l]["eng_name"] - lang_model_path = os.path.join(models_path, lang_name) - if not os.path.isdir(lang_model_path): - print("Downloading model for language: %s" % lang_name) - remote_url = MODEL_DOWNLOAD_URL_PREFIX + lang_name + ".zip" - downloaded_zip_path = os.path.join(models_path, lang_name + ".zip") - dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None) - - if not os.path.isfile(downloaded_zip_path): - exit( - f"ERROR: Unable to download model from {remote_url} into {models_path}" - ) - - with zipfile.ZipFile(downloaded_zip_path, "r") as zip_ref: - zip_ref.extractall(models_path) - - if os.path.isdir(lang_model_path): - os.remove(downloaded_zip_path) - else: - exit( - f"ERROR: Unable to find models in {lang_model_path} after download" - ) - return - - def translit_word(self, eng_word, lang_code="default", topk=7, beam_width=10): - if eng_word == "": - return [] - - if lang_code in self.langs: - try: - res_list = self.lang_model[lang_code].inferencer( - eng_word, beam_width=beam_width - ) - return res_list[:topk] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - elif lang_code == "default": - try: - res_dict = {} - for la in self.lang_model: - res = self.lang_model[la].inferencer( - eng_word, beam_width=beam_width - ) - res_dict[la] = res[:topk] - return res_dict - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - def translit_sentence(self, eng_sentence, lang_code="default", beam_width=10): - if eng_sentence == "": - return [] - - if lang_code in self.langs: - try: - out_str = "" - for word in 
eng_sentence.split(): - res_ = self.lang_model[lang_code].inferencer( - word, beam_width=beam_width - ) - out_str = out_str + res_[0] + " " - return out_str[:-1] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - elif lang_code == "default": - try: - res_dict = {} - for la in self.lang_model: - out_str = "" - for word in eng_sentence.split(): - res_ = self.lang_model[la].inferencer( - word, beam_width=beam_width - ) - out_str = out_str + res_[0] + " " - res_dict[la] = out_str[:-1] - return res_dict - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - -if __name__ == "__main__": - - available_lang = [ - "bn", - "gu", - "hi", - "kn", - "gom", - "mai", - "ml", - "mr", - "pa", - "sd", - "si", - "ta", - "te", - "ur", - ] - - reg = re.compile(r"[a-zA-Z]") - lang = "hi" - engine = XlitEngine( - lang - ) # if you don't specify lang code here, this will give results in all langs available - sent = "Hello World! ABCD क्या हाल है आपका?" - words = [ - engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word - for word in sent.split() - ] # only transliterated en words, leaves rest as it is - updated_sent = " ".join(words) - - print(updated_sent) - - # output : हेलो वर्ल्ड! क्या हाल है आपका? - - # y = engine.translit_sentence("Hello World !")['hi'] - # print(y) diff --git a/spaces/ravithejads/videoques/app.py b/spaces/ravithejads/videoques/app.py deleted file mode 100644 index 3966b89a7880f206ee4ad7c6c6349746823434d0..0000000000000000000000000000000000000000 --- a/spaces/ravithejads/videoques/app.py +++ /dev/null @@ -1,156 +0,0 @@ -from llama_index import Document, GPTListIndex, GPTSimpleVectorIndex -import gradio as gr -import openai -import os -from pytube import YouTube - - -def download_yt_video(ytlink): - - try: - - yt = YouTube(ytlink) - - video = yt.streams.filter(only_audio=True).first() - - out_file = video.download(output_path="./") - - base, ext = os.path.splitext(out_file) - new_file = base + '.mp3' - - os.rename(out_file, new_file) - - return new_file - except Exception as e: - return e - - -def get_transcript(filename): - import requests - import json - - headers = { - 'accept': 'application/json', - 'x-gladia-key': '70ad5f6e-31e6-4acf-8a15-89c166c4cc9f', - # requests won't add a boundary if this header is set when you pass files= - # 'Content-Type': 'multipart/form-data', - } - - files = { - 'audio': (filename, open(filename, 'rb'), 'audio/mpeg'), - 'language': (None, 'english'), - 'language_behaviour': (None, 'manual'), - 'output_format': (None, 'json'), - } - - response = requests.post( - 'https://api.gladia.io/audio/text/audio-transcription/', headers=headers, files=files) - - data = json.loads(response.text) - - result = "" - for dict_ in data['prediction']: - result = result + dict_['transcription'] + " " - - result = ' '.join(result.strip().split()) - - with open(f"{filename[:-4]}.txt", "w") as f: - f.write(result) - - return result - - -def createindex(url, openaikey): - - try: - filename = download_yt_video(url) - - transcript = get_transcript(filename) - - os.remove(filename) - - # Store openai key in environment - os.environ['OPENAI_API_KEY'] = openaikey - - # Create index - index = GPTListIndex([Document(transcript)], 
chunk_size_limit=2500) - - index_filename = "index.json" - index.save_to_disk(index_filename) - - return "Video processed. Now you can start querying." - except Exception as e: - return e - - -def videoques(query, openaikey): - - # Basic Checks - if not query: - return "Please enter your query." - - # Basic Checks - if not openaikey: - return "Please enter openaikey." - - # Store openai key in environment - os.environ['OPENAI_API_KEY'] = openaikey - - index_name = "index.json" - - index = GPTListIndex.load_from_disk(index_name) - - # Query based on index - response = index.query(query, mode="embedding", similarity_top_k=4) - - return response - - -def cleartext(query, output): - """ - Function to clear text - """ - return ["", ""] - - -with gr.Blocks() as demo: - gr.Markdown( - """ -

          VideoQues

          - - """) - gr.Markdown( - """ - VideoQues answers your queries on any youtube video. - - """) - with gr.Row(): - with gr.Column(): - url = gr.Textbox(lines=1, label="Enter Youtube Video link.") - openaikey = gr.Textbox(lines=1, label="Enter Your OpenAI key.") - submit1_button = gr.Button("Submit") - ans1_output = gr.Textbox(label="Status.") - clear1_button = gr.Button("Clear") - with gr.Column(): - query = gr.Textbox(lines=2, label="Enter Your Query.") - submit2_button = gr.Button("Submit") - ans2_output = gr.Textbox(label="Answer.") - clear2_button = gr.Button("Clear") - - # Submit button for showing YT Video thumbnail. - submit1_button.click(createindex, inputs=[ - url, openaikey], outputs=[ans1_output]) - - # Submit button for submitting query. - submit2_button.click(videoques, inputs=[ - query, openaikey], outputs=[ans2_output]) - - # Clear button for clearing query and answer. - clear1_button.click(cleartext, inputs=[ - url, ans1_output], outputs=[url, ans1_output]) - - # Clear button for clearing query and answer. - clear2_button.click(cleartext, inputs=[query, ans2_output], outputs=[ - query, ans2_output]) - -demo.launch(debug=True) diff --git a/spaces/reach-vb/speech-t5-this-speaker-does-not-exist/README.md b/spaces/reach-vb/speech-t5-this-speaker-does-not-exist/README.md deleted file mode 100644 index b00de1f0412a56568cc8b554a4ee8b880a8b7afb..0000000000000000000000000000000000000000 --- a/spaces/reach-vb/speech-t5-this-speaker-does-not-exist/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SpeechT5 Speech Synthesis Demo -emoji: 👩‍🎤 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Matthijs/speecht5-tts-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Generar El Codigo De Activacion Ecuakaraoke BETTER.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Generar El Codigo De Activacion Ecuakaraoke BETTER.md deleted file mode 100644 index 1130f222451d3b4b3d75f818fc35ed645eadd229..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Generar El Codigo De Activacion Ecuakaraoke BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Generate the Ecuakaraoke activation code
    


              Download Zip: https://urlgoal.com/2uCJ42
    



          -
              -Generate the Ecuakaraoke activation code · A Coursebook on Scientific and Professional Writing for Speech-Language Pathology 1fdad05405
    
          -
          -
          -

          diff --git a/spaces/renatotn7/teste2/tests/test_gfpgan_arch.py b/spaces/renatotn7/teste2/tests/test_gfpgan_arch.py deleted file mode 100644 index cef14a435aa824a1b7c4baaf2d1fe0a2f6cc4441..0000000000000000000000000000000000000000 --- a/spaces/renatotn7/teste2/tests/test_gfpgan_arch.py +++ /dev/null @@ -1,203 +0,0 @@ -import torch - -from gfpgan.archs.gfpganv1_arch import FacialComponentDiscriminator, GFPGANv1, StyleGAN2GeneratorSFT -from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean, StyleGAN2GeneratorCSFT - - -def test_stylegan2generatorsft(): - """Test arch: StyleGAN2GeneratorSFT.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = StyleGAN2GeneratorSFT( - out_size=32, - num_style_feat=512, - num_mlp=8, - channel_multiplier=1, - resample_kernel=(1, 3, 3, 1), - lr_mlp=0.01, - narrow=1, - sft_half=False).cuda().eval() - style = torch.rand((1, 512), dtype=torch.float32).cuda() - condition1 = torch.rand((1, 512, 8, 8), dtype=torch.float32).cuda() - condition2 = torch.rand((1, 512, 16, 16), dtype=torch.float32).cuda() - condition3 = torch.rand((1, 512, 32, 32), dtype=torch.float32).cuda() - conditions = [condition1, condition1, condition2, condition2, condition3, condition3] - output = net([style], conditions) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with return_latents ----------------------- # - output = net([style], conditions, return_latents=True) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 1 - # check latent - assert output[1][0].shape == (8, 512) - - # -------------------- with randomize_noise = False ----------------------- # - output = net([style], conditions, randomize_noise=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with truncation = 0.5 and mixing----------------------- # - output = net([style, style], conditions, truncation=0.5, truncation_latent=style) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - -def test_gfpganv1(): - """Test arch: GFPGANv1.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = GFPGANv1( - out_size=32, - num_style_feat=512, - channel_multiplier=1, - resample_kernel=(1, 3, 3, 1), - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - lr_mlp=0.01, - input_is_latent=False, - different_w=False, - narrow=1, - sft_half=True).cuda().eval() - img = torch.rand((1, 3, 32, 32), dtype=torch.float32).cuda() - output = net(img) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 3 - # check out_rgbs for intermediate loss - assert output[1][0].shape == (1, 3, 8, 8) - assert output[1][1].shape == (1, 3, 16, 16) - assert output[1][2].shape == (1, 3, 32, 32) - - # -------------------- with different_w = True ----------------------- # - net = GFPGANv1( - out_size=32, - num_style_feat=512, - channel_multiplier=1, - resample_kernel=(1, 3, 3, 1), - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - lr_mlp=0.01, - input_is_latent=False, - different_w=True, - narrow=1, - sft_half=True).cuda().eval() - img = torch.rand((1, 3, 32, 32), dtype=torch.float32).cuda() - output = net(img) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 3 - # check out_rgbs for intermediate loss - assert output[1][0].shape == (1, 3, 8, 8) - assert output[1][1].shape == (1, 3, 16, 16) - assert output[1][2].shape == (1, 3, 32, 32) - - -def test_facialcomponentdiscriminator(): 
- """Test arch: FacialComponentDiscriminator.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = FacialComponentDiscriminator().cuda().eval() - img = torch.rand((1, 3, 32, 32), dtype=torch.float32).cuda() - output = net(img) - assert len(output) == 2 - assert output[0].shape == (1, 1, 8, 8) - assert output[1] is None - - # -------------------- return intermediate features ----------------------- # - output = net(img, return_feats=True) - assert len(output) == 2 - assert output[0].shape == (1, 1, 8, 8) - assert len(output[1]) == 2 - assert output[1][0].shape == (1, 128, 16, 16) - assert output[1][1].shape == (1, 256, 8, 8) - - -def test_stylegan2generatorcsft(): - """Test arch: StyleGAN2GeneratorCSFT.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = StyleGAN2GeneratorCSFT( - out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=1, sft_half=False).cuda().eval() - style = torch.rand((1, 512), dtype=torch.float32).cuda() - condition1 = torch.rand((1, 512, 8, 8), dtype=torch.float32).cuda() - condition2 = torch.rand((1, 512, 16, 16), dtype=torch.float32).cuda() - condition3 = torch.rand((1, 512, 32, 32), dtype=torch.float32).cuda() - conditions = [condition1, condition1, condition2, condition2, condition3, condition3] - output = net([style], conditions) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with return_latents ----------------------- # - output = net([style], conditions, return_latents=True) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 1 - # check latent - assert output[1][0].shape == (8, 512) - - # -------------------- with randomize_noise = False ----------------------- # - output = net([style], conditions, randomize_noise=False) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - # -------------------- with truncation = 0.5 and mixing----------------------- # - output = net([style, style], conditions, truncation=0.5, truncation_latent=style) - assert output[0].shape == (1, 3, 32, 32) - assert output[1] is None - - -def test_gfpganv1clean(): - """Test arch: GFPGANv1Clean.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = GFPGANv1Clean( - out_size=32, - num_style_feat=512, - channel_multiplier=1, - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - input_is_latent=False, - different_w=False, - narrow=1, - sft_half=True).cuda().eval() - - img = torch.rand((1, 3, 32, 32), dtype=torch.float32).cuda() - output = net(img) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 3 - # check out_rgbs for intermediate loss - assert output[1][0].shape == (1, 3, 8, 8) - assert output[1][1].shape == (1, 3, 16, 16) - assert output[1][2].shape == (1, 3, 32, 32) - - # -------------------- with different_w = True ----------------------- # - net = GFPGANv1Clean( - out_size=32, - num_style_feat=512, - channel_multiplier=1, - decoder_load_path=None, - fix_decoder=True, - # for stylegan decoder - num_mlp=8, - input_is_latent=False, - different_w=True, - narrow=1, - sft_half=True).cuda().eval() - img = torch.rand((1, 3, 32, 32), dtype=torch.float32).cuda() - output = net(img) - assert output[0].shape == (1, 3, 32, 32) - assert len(output[1]) == 3 - # check out_rgbs for intermediate loss - assert output[1][0].shape == (1, 3, 8, 8) - assert output[1][1].shape == (1, 3, 16, 16) - assert output[1][2].shape == (1, 3, 32, 32) diff --git 
a/spaces/rlancemartin/auto-evaluator/app.py b/spaces/rlancemartin/auto-evaluator/app.py deleted file mode 100644 index 10faec1deebd3e447258331dbac5e26ae5cfc5a5..0000000000000000000000000000000000000000 --- a/spaces/rlancemartin/auto-evaluator/app.py +++ /dev/null @@ -1,491 +0,0 @@ -import os -import json -import time -from typing import List -import faiss -import pypdf -import random -import itertools -import text_utils -import pandas as pd -import altair as alt -import streamlit as st -from io import StringIO -from llama_index import Document -from langchain.llms import Anthropic -from langchain import HuggingFaceHub -from langchain.chains import RetrievalQA -from langchain.vectorstores import FAISS -from llama_index import LangchainEmbedding -from langchain.chat_models import ChatOpenAI -from langchain.retrievers import SVMRetriever -from langchain.chains import QAGenerationChain -from langchain.retrievers import TFIDFRetriever -from langchain.evaluation.qa import QAEvalChain -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.embeddings.openai import OpenAIEmbeddings -from gpt_index import LLMPredictor, ServiceContext, GPTFaissIndex -from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter -from text_utils import GRADE_DOCS_PROMPT, GRADE_ANSWER_PROMPT, GRADE_DOCS_PROMPT_FAST, GRADE_ANSWER_PROMPT_FAST, GRADE_ANSWER_PROMPT_BIAS_CHECK, GRADE_ANSWER_PROMPT_OPENAI - -# Keep dataframe in memory to accumulate experimental results -if "existing_df" not in st.session_state: - summary = pd.DataFrame(columns=['chunk_chars', - 'overlap', - 'split', - 'model', - 'retriever', - 'embedding', - 'num_neighbors', - 'Latency', - 'Retrieval score', - 'Answer score']) - st.session_state.existing_df = summary -else: - summary = st.session_state.existing_df - - -@st.cache_data -def load_docs(files: List) -> str: - """ - Load docs from files - @param files: list of files to load - @return: string of all docs concatenated - """ - - st.info("`Reading doc ...`") - all_text = "" - for file_path in files: - file_extension = os.path.splitext(file_path.name)[1] - if file_extension == ".pdf": - pdf_reader = pypdf.PdfReader(file_path) - file_content = "" - for page in pdf_reader.pages: - file_content += page.extract_text() - file_content = text_utils.clean_pdf_text(file_content) - all_text += file_content - elif file_extension == ".txt": - stringio = StringIO(file_path.getvalue().decode("utf-8")) - file_content = stringio.read() - all_text += file_content - else: - st.warning('Please provide txt or pdf.', icon="⚠️") - return all_text - - -@st.cache_data -def generate_eval(text: str, num_questions: int, chunk: int): - """ - Generate eval set - @param text: text to generate eval set from - @param num_questions: number of questions to generate - @param chunk: chunk size to draw question from in the doc - @return: eval set as JSON list - """ - st.info("`Generating eval set ...`") - n = len(text) - starting_indices = [random.randint(0, n - chunk) for _ in range(num_questions)] - sub_sequences = [text[i:i + chunk] for i in starting_indices] - chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0)) - eval_set = [] - for i, b in enumerate(sub_sequences): - try: - qa = chain.run(b) - eval_set.append(qa) - except: - st.warning('Error generating question %s.' 
% str(i + 1), icon="⚠️") - eval_set_full = list(itertools.chain.from_iterable(eval_set)) - return eval_set_full - - -@st.cache_resource -def split_texts(text, chunk_size: int, overlap, split_method: str): - """ - Split text into chunks - @param text: text to split - @param chunk_size: - @param overlap: - @param split_method: - @return: list of str splits - """ - st.info("`Splitting doc ...`") - if split_method == "RecursiveTextSplitter": - text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, - chunk_overlap=overlap) - elif split_method == "CharacterTextSplitter": - text_splitter = CharacterTextSplitter(separator=" ", - chunk_size=chunk_size, - chunk_overlap=overlap) - else: - st.warning("`Split method not recognized. Using RecursiveCharacterTextSplitter`", icon="⚠️") - text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, - chunk_overlap=overlap) - - split_text = text_splitter.split_text(text) - return split_text - - -@st.cache_resource -def make_llm(model_version: str): - """ - Make LLM from model version - @param model_version: model_version - @return: LLN - """ - if (model_version == "gpt-3.5-turbo") or (model_version == "gpt-4"): - chosen_model = ChatOpenAI(model_name=model_version, temperature=0) - elif model_version == "anthropic": - chosen_model = Anthropic(temperature=0) - elif model_version == "flan-t5-xl": - chosen_model = HuggingFaceHub(repo_id="google/flan-t5-xl",model_kwargs={"temperature":0,"max_length":64}) - else: - st.warning("`Model version not recognized. Using gpt-3.5-turbo`", icon="⚠️") - chosen_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0) - return chosen_model - -@st.cache_resource -def make_retriever(splits, retriever_type, embedding_type, num_neighbors, _llm): - """ - Make document retriever - @param splits: list of str splits - @param retriever_type: retriever type - @param embedding_type: embedding type - @param num_neighbors: number of neighbors for retrieval - @param _llm: model - @return: retriever - """ - st.info("`Making retriever ...`") - # Set embeddings - if embedding_type == "OpenAI": - embedding = OpenAIEmbeddings() - elif embedding_type == "HuggingFace": - embedding = HuggingFaceEmbeddings() - else: - st.warning("`Embedding type not recognized. Using OpenAI`", icon="⚠️") - embedding = OpenAIEmbeddings() - - # Select retriever - if retriever_type == "similarity-search": - try: - vector_store = FAISS.from_texts(splits, embedding) - except ValueError: - st.warning("`Error using OpenAI embeddings (disallowed TikToken token in the text). Using HuggingFace.`", - icon="⚠️") - vector_store = FAISS.from_texts(splits, HuggingFaceEmbeddings()) - retriever_obj = vector_store.as_retriever(k=num_neighbors) - elif retriever_type == "SVM": - retriever_obj = SVMRetriever.from_texts(splits, embedding) - elif retriever_type == "TF-IDF": - retriever_obj = TFIDFRetriever.from_texts(splits) - elif retriever_type == "Llama-Index": - documents = [Document(t, LangchainEmbedding(embedding)) for t in splits] - llm_predictor = LLMPredictor(llm) - context = ServiceContext.from_defaults(chunk_size_limit=512, llm_predictor=llm_predictor) - d = 1536 - faiss_index = faiss.IndexFlatL2(d) - retriever_obj = GPTFaissIndex.from_documents(documents, faiss_index=faiss_index, service_context=context) - else: - st.warning("`Retriever type not recognized. 
Using SVM`", icon="⚠️") - retriever_obj = SVMRetriever.from_texts(splits, embedding) - return retriever_obj - - -def make_chain(llm, retriever, retriever_type: str) -> RetrievalQA: - """ - Make chain - @param llm: model - @param retriever: retriever - @param retriever_type: retriever type - @return: chain (or return retriever for Llama-Index) - """ - st.info("`Making chain ...`") - if retriever_type == "Llama-Index": - qa = retriever - else: - qa = RetrievalQA.from_chain_type(llm, - chain_type="stuff", - retriever=retriever, - input_key="question") - return qa - - -def grade_model_answer(predicted_dataset: List, predictions: List, grade_answer_prompt: str) -> List: - """ - Grades the distilled answer based on ground truth and model predictions. - @param predicted_dataset: A list of dictionaries containing ground truth questions and answers. - @param predictions: A list of dictionaries containing model predictions for the questions. - @param grade_answer_prompt: The prompt level for the grading. Either "Fast" or "Full". - @return: A list of scores for the distilled answers. - """ - # Grade the distilled answer - st.info("`Grading model answer ...`") - # Set the grading prompt based on the grade_answer_prompt parameter - if grade_answer_prompt == "Fast": - prompt = GRADE_ANSWER_PROMPT_FAST - elif grade_answer_prompt == "Descriptive w/ bias check": - prompt = GRADE_ANSWER_PROMPT_BIAS_CHECK - elif grade_answer_prompt == "OpenAI grading prompt": - prompt = GRADE_ANSWER_PROMPT_OPENAI - else: - prompt = GRADE_ANSWER_PROMPT - - # Create an evaluation chain - eval_chain = QAEvalChain.from_llm( - llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), - prompt=prompt - ) - - # Evaluate the predictions and ground truth using the evaluation chain - graded_outputs = eval_chain.evaluate( - predicted_dataset, - predictions, - question_key="question", - prediction_key="result" - ) - - return graded_outputs - - -def grade_model_retrieval(gt_dataset: List, predictions: List, grade_docs_prompt: str): - """ - Grades the relevance of retrieved documents based on ground truth and model predictions. - @param gt_dataset: list of dictionaries containing ground truth questions and answers. - @param predictions: list of dictionaries containing model predictions for the questions - @param grade_docs_prompt: prompt level for the grading. Either "Fast" or "Full" - @return: list of scores for the retrieved documents. - """ - # Grade the docs retrieval - st.info("`Grading relevance of retrieved docs ...`") - - # Set the grading prompt based on the grade_docs_prompt parameter - prompt = GRADE_DOCS_PROMPT_FAST if grade_docs_prompt == "Fast" else GRADE_DOCS_PROMPT - - # Create an evaluation chain - eval_chain = QAEvalChain.from_llm( - llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), - prompt=prompt - ) - - # Evaluate the predictions and ground truth using the evaluation chain - graded_outputs = eval_chain.evaluate( - gt_dataset, - predictions, - question_key="question", - prediction_key="result" - ) - return graded_outputs - - -def run_evaluation(chain, retriever, eval_set, grade_prompt, retriever_type, num_neighbors): - """ - Runs evaluation on a model's performance on a given evaluation dataset. 
- @param chain: Model chain used for answering questions - @param retriever: Document retriever used for retrieving relevant documents - @param eval_set: List of dictionaries containing questions and corresponding ground truth answers - @param grade_prompt: String prompt used for grading model's performance - @param retriever_type: String specifying the type of retriever used - @param num_neighbors: Number of neighbors to retrieve using the retriever - @return: A tuple of four items: - - answers_grade: A dictionary containing scores for the model's answers. - - retrieval_grade: A dictionary containing scores for the model's document retrieval. - - latencies_list: A list of latencies in seconds for each question answered. - - predictions_list: A list of dictionaries containing the model's predicted answers and relevant documents for each question. - """ - st.info("`Running evaluation ...`") - predictions_list = [] - retrieved_docs = [] - gt_dataset = [] - latencies_list = [] - - for data in eval_set: - - # Get answer and log latency - start_time = time.time() - if retriever_type != "Llama-Index": - predictions_list.append(chain(data)) - elif retriever_type == "Llama-Index": - answer = chain.query(data["question"], similarity_top_k=num_neighbors, response_mode="tree_summarize", - use_async=True) - predictions_list.append({"question": data["question"], "answer": data["answer"], "result": answer.response}) - gt_dataset.append(data) - end_time = time.time() - elapsed_time = end_time - start_time - latencies_list.append(elapsed_time) - - # Retrieve docs - retrieved_doc_text = "" - if retriever_type == "Llama-Index": - for i, doc in enumerate(answer.source_nodes): - retrieved_doc_text += "Doc %s: " % str(i + 1) + doc.node.text + " " - - else: - docs = retriever.get_relevant_documents(data["question"]) - for i, doc in enumerate(docs): - retrieved_doc_text += "Doc %s: " % str(i + 1) + doc.page_content + " " - - retrieved = {"question": data["question"], "answer": data["answer"], "result": retrieved_doc_text} - retrieved_docs.append(retrieved) - - # Grade - answers_grade = grade_model_answer(gt_dataset, predictions_list, grade_prompt) - retrieval_grade = grade_model_retrieval(gt_dataset, retrieved_docs, grade_prompt) - return answers_grade, retrieval_grade, latencies_list, predictions_list - - -# Auth -st.sidebar.image("img/diagnostic.jpg") - -oai_api_key = st.sidebar.text_input("`OpenAI API Key:`", type="password") -ant_api_key = st.sidebar.text_input("`(Optional) Anthropic API Key:`", type="password") -hf_api_key = st.sidebar.text_input("`(Optional) HuggingFace API Token:`", type="password") - -with st.sidebar.form("user_input"): - - num_eval_questions = st.select_slider("`Number of eval questions`", - options=[1, 5, 10, 15, 20], value=5) - - chunk_chars = st.select_slider("`Choose chunk size for splitting`", - options=[500, 750, 1000, 1500, 2000], value=1000) - - overlap = st.select_slider("`Choose overlap for splitting`", - options=[0, 50, 100, 150, 200], value=100) - - split_method = st.radio("`Split method`", - ("RecursiveTextSplitter", - "CharacterTextSplitter"), - index=0) - - model = st.radio("`Choose model`", - ("gpt-3.5-turbo", - "gpt-4", - "anthropic"), - # Error raised by inference API: Model google/flan-t5-xl time out - #"flan-t5-xl"), - index=0) - - retriever_type = st.radio("`Choose retriever`", - ("TF-IDF", - "SVM", - "Llama-Index", - "similarity-search"), - index=3) - - num_neighbors = st.select_slider("`Choose # chunks to retrieve`", - options=[3, 4, 5, 6, 7, 8]) - - embeddings = 
st.radio("`Choose embeddings`", - ("HuggingFace", - "OpenAI"), - index=1) - - grade_prompt = st.radio("`Grading style prompt`", - ("Fast", - "Descriptive", - "Descriptive w/ bias check", - "OpenAI grading prompt"), - index=0) - - submitted = st.form_submit_button("Submit evaluation") - -st.sidebar.write("`By:` [@RLanceMartin](https://twitter.com/RLanceMartin)") - -# App -st.header("`Auto-evaluator`") -st.info( - "`I am an evaluation tool for question-answering built on LangChain. Given documents, I will auto-generate a question-answer eval " - "set and evaluate using the selected chain settings. Experiments with different configurations are logged. " - "Optionally, provide your own eval set (as a JSON, see docs/karpathy-pod-eval.json for an example). If you don't have acess to GPT-4 or Anthropic, you can use our free hosted app here: https://autoevaluator.langchain.com/`") - -with st.form(key='file_inputs'): - uploaded_file = st.file_uploader("`Please upload a file to evaluate (.txt or .pdf):` ", - type=['pdf', 'txt'], - accept_multiple_files=True) - - uploaded_eval_set = st.file_uploader("`[Optional] Please upload eval set (.json):` ", - type=['json'], - accept_multiple_files=False) - - submitted = st.form_submit_button("Submit files") - -if uploaded_file and oai_api_key: - - os.environ["OPENAI_API_KEY"] = oai_api_key - os.environ["ANTHROPIC_API_KEY"] = ant_api_key - os.environ["HUGGINGFACEHUB_API_TOKEN"] = hf_api_key - - # Load docs - text = load_docs(uploaded_file) - # Generate num_eval_questions questions, each from context of 3k chars randomly selected - if not uploaded_eval_set: - eval_set = generate_eval(text, num_eval_questions, 3000) - else: - eval_set = json.loads(uploaded_eval_set.read()) - # Split text - splits = split_texts(text, chunk_chars, overlap, split_method) - # Make LLM - llm = make_llm(model) - # Make vector DB - retriever = make_retriever(splits, retriever_type, embeddings, num_neighbors, llm) - # Make chain - qa_chain = make_chain(llm, retriever, retriever_type) - # Grade model - graded_answers, graded_retrieval, latency, predictions = run_evaluation(qa_chain, retriever, eval_set, grade_prompt, - retriever_type, num_neighbors) - - # Assemble outputs - d = pd.DataFrame(predictions) - d['answer score'] = [g['text'] for g in graded_answers] - d['docs score'] = [g['text'] for g in graded_retrieval] - d['latency'] = latency - - # Summary statistics - mean_latency = d['latency'].mean() - correct_answer_count = len([text for text in d['answer score'] if "INCORRECT" not in text]) - correct_docs_count = len([text for text in d['docs score'] if "Context is relevant: True" in text]) - percentage_answer = (correct_answer_count / len(graded_answers)) * 100 - percentage_docs = (correct_docs_count / len(graded_retrieval)) * 100 - - st.subheader("`Run Results`") - st.info( - "`I will grade the chain based on: 1/ the relevance of the retrived documents relative to the question and 2/ " - "the summarized answer relative to the ground truth answer. You can see (and change) to prompts used for " - "grading in text_utils`") - st.dataframe(data=d, use_container_width=True) - - # Accumulate results - st.subheader("`Aggregate Results`") - st.info( - "`Retrieval and answer scores are percentage of retrived documents deemed relevant by the LLM grader (" - "relative to the question) and percentage of summarized answers deemed relevant (relative to ground truth " - "answer), respectively. 
The size of point correponds to the latency (in seconds) of retrieval + answer " - "summarization (larger circle = slower).`") - new_row = pd.DataFrame({'chunk_chars': [chunk_chars], - 'overlap': [overlap], - 'split': [split_method], - 'model': [model], - 'retriever': [retriever_type], - 'embedding': [embeddings], - 'num_neighbors': [num_neighbors], - 'Latency': [mean_latency], - 'Retrieval score': [percentage_docs], - 'Answer score': [percentage_answer]}) - summary = pd.concat([summary, new_row], ignore_index=True) - st.dataframe(data=summary, use_container_width=True) - st.session_state.existing_df = summary - - # Dataframe for visualization - show = summary.reset_index().copy() - show.columns = ['expt number', 'chunk_chars', 'overlap', - 'split', 'model', 'retriever', 'embedding', 'num_neighbors', 'Latency', 'Retrieval score', - 'Answer score'] - show['expt number'] = show['expt number'].apply(lambda x: "Expt #: " + str(x + 1)) - c = alt.Chart(show).mark_circle().encode(x='Retrieval score', - y='Answer score', - size=alt.Size('Latency'), - color='expt number', - tooltip=['expt number', 'Retrieval score', 'Latency', 'Answer score']) - st.altair_chart(c, use_container_width=True, theme="streamlit") - -else: - - st.warning("Please input file and API key(s)!") \ No newline at end of file diff --git a/spaces/ronvolutional/iframe-test/modules/app.py b/spaces/ronvolutional/iframe-test/modules/app.py deleted file mode 100644 index 47844882f87cc97181a32fb38afa7b3c9ba3562b..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/iframe-test/modules/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import requests -import json -from io import BytesIO - -from fastapi import FastAPI -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse, StreamingResponse - -from modules.inference import infer_t5 -from modules.dataset import query_emotion - -# https://huggingface.co/settings/tokens -# https://huggingface.co/spaces/{username}/{space}/settings -API_TOKEN = os.getenv("BIG_GAN_TOKEN") - -app = FastAPI(docs_url=None, redoc_url=None) - -app.mount("/static", StaticFiles(directory="static"), name="static") - - -@app.head("/") -@app.get("/") -def index() -> FileResponse: - return FileResponse(path="static/index.html", media_type="text/html") - - -@app.get("/infer_biggan") -def biggan(input): - output = requests.request( - "POST", - "https://api-inference.huggingface.co/models/osanseviero/BigGAN-deep-128", - headers={"Authorization": f"Bearer {API_TOKEN}"}, - data=json.dumps(input), - ) - - return StreamingResponse(BytesIO(output.content), media_type="image/png") - - -@app.get("/infer_t5") -def t5(input): - output = infer_t5(input) - - return {"output": output} - - -@app.get("/query_emotion") -def emotion(start, end): - output = query_emotion(int(start), int(end)) - - return {"output": output} diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Kasumi Rebirth V3.25 Full Fixed.md b/spaces/rorallitri/biomedical-language-models/logs/Download Kasumi Rebirth V3.25 Full Fixed.md deleted file mode 100644 index 8448d0ffaabbec6c454f767621fe82c271d7e3bd..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Kasumi Rebirth V3.25 Full Fixed.md +++ /dev/null @@ -1,6 +0,0 @@ -

          download kasumi rebirth v3.25 full


          DOWNLOAD » https://tinurll.com/2uzm7x



          -
          - 3cee63e6c2
          -
          -
          -

          diff --git a/spaces/rubberboy/stable-diffusion-webui/oh-no.py b/spaces/rubberboy/stable-diffusion-webui/oh-no.py deleted file mode 100644 index e8c0f3bd8d72805b4ee69d4d0fd9133347d00f92..0000000000000000000000000000000000000000 --- a/spaces/rubberboy/stable-diffusion-webui/oh-no.py +++ /dev/null @@ -1,14 +0,0 @@ -import gradio as gr - -block = gr.Blocks() - -def run(): - with block: - gr.Markdown( - """ -

          oh no 😐 something wrong with the 🤗 hugging face servers 😐 hopefully, it will be fixed soon

          - """) - block.launch(server_name="0.0.0.0", server_port=7860) - -if __name__ == "__main__": - run() \ No newline at end of file diff --git a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/PositionwiseFeedForward.py b/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/PositionwiseFeedForward.py deleted file mode 100644 index 1938b392e631c8c9d4179f2b34557a6b531a0174..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/PositionwiseFeedForward.py +++ /dev/null @@ -1,26 +0,0 @@ -# Written by Shigeki Karita, 2019 -# Published under Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) -# Adapted by Florian Lux, 2021 - - -import torch - - -class PositionwiseFeedForward(torch.nn.Module): - """ - Args: - idim (int): Input dimenstion. - hidden_units (int): The number of hidden units. - dropout_rate (float): Dropout rate. - - """ - - def __init__(self, idim, hidden_units, dropout_rate, activation=torch.nn.ReLU()): - super(PositionwiseFeedForward, self).__init__() - self.w_1 = torch.nn.Linear(idim, hidden_units) - self.w_2 = torch.nn.Linear(hidden_units, idim) - self.dropout = torch.nn.Dropout(dropout_rate) - self.activation = activation - - def forward(self, x): - return self.w_2(self.dropout(self.activation(self.w_1(x)))) diff --git a/spaces/scedlatioru/img-to-music/example/Asureid7licensekeygen.md b/spaces/scedlatioru/img-to-music/example/Asureid7licensekeygen.md deleted file mode 100644 index 7d0b91415109d3242823eeed90b23954319a9a8b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Asureid7licensekeygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

          asureid7licensekeygen


          Download File ✪✪✪ https://gohhs.com/2uEyXK



          -
          -Asure Id 7 License Keygen Rating: 7,7/10 275reviews. /**/ Title: Asure Id Express 7 Crack Size: 7.8 Free popular software download incl crack serial nocd .... 1fdad05405
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Delf B1 Pdf Free ((LINK)) Download.md b/spaces/scedlatioru/img-to-music/example/Delf B1 Pdf Free ((LINK)) Download.md deleted file mode 100644 index f29c4e11d12a56934c778cd12e59a7abcefcc846..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Delf B1 Pdf Free ((LINK)) Download.md +++ /dev/null @@ -1,7 +0,0 @@ -
          -


          there are many different ways that malware can infect your computer. many of them can even appear to be harmless. most viruses are spread by people visiting malicious web sites and downloading an infected attachment. the antivirus software will not protect you from an infected attachment or file if you do not have it installed on your computer. if you do not have an antivirus program, you should either purchase one or download a free one. antivirus software will not help you if you do not have it installed on your computer.

          -

          delf b1 pdf free download


          DOWNLOADhttps://gohhs.com/2uEzDx



          -

          why not try our free online toolkit! in less than a minute you will be able to create a questionnaire using our template. the questionnaire can contain a complex range of questions and can be filled in by users via the portal or by themselves on the web. the data is then automatically saved to the database. after that, the survey can be published to the portal for other users to fill in and then processed. finally, you will get an access to the results of the survey, or you can request your own results. our free online toolkit is a simple and convenient way to create your own survey.

          -

          delf b1 is the first step towards the bachelor of science in engineering at the delft university of technology. if you want to get started right away, you can choose to take a delf b1 sample papers free. the sample papers can be downloaded as pdf files or printed for examination purposes. the sample papers are available to all registered delft3d members. delft3d provides a very easy way to examine the delft3d b1. there are two sample papers available; one in dutch and one in english.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Dinamica De Sistemas Y Control Eronini Pdf 48 BEST.md b/spaces/scedlatioru/img-to-music/example/Dinamica De Sistemas Y Control Eronini Pdf 48 BEST.md deleted file mode 100644 index e08e304fc78b312df2cdd160c68aea5ca842193b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Dinamica De Sistemas Y Control Eronini Pdf 48 BEST.md +++ /dev/null @@ -1,16 +0,0 @@ -

          Dinamica De Sistemas Y Control Eronini Pdf 48


          Download File ❤❤❤ https://gohhs.com/2uEA87



          -
          -dinamica de sistemas y control eronini pdf 48 4. -Feb 7, 2019 Leather skirts are in style in the 90s, but with modern details. -And leather mini skirts are good because you can safely wear them. -Buy clothes, shoes and accessories from Stussy. buy in the Jeans Symphony online store. -Stussy original products with discounts up to 70%. -Fashionable. -Women's shoes spring-summer 2019! -Shoes, sandals, ballet shoes, clogs, sneakers, etc. -Free shipping*; Lamoda KZ Summer 2019! -New arrival! -5 Jun 2019 To 8a78ff9644
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/FULL IMacros Enterprise Edition V10.2.2823.md b/spaces/scedlatioru/img-to-music/example/FULL IMacros Enterprise Edition V10.2.2823.md deleted file mode 100644 index b756469f78828dc583d55c7f06c0115d683ce358..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/FULL IMacros Enterprise Edition V10.2.2823.md +++ /dev/null @@ -1,47 +0,0 @@ -
          -

          How to Automate Web Tasks with FULL iMacros Enterprise Edition v10.2.2823

          -

          If you are looking for a powerful and easy-to-use tool to automate web tasks such as filling forms, scraping data, testing websites, and more, then you should consider using FULL iMacros Enterprise Edition v10.2.2823. This is the latest version of the popular software that allows you to record and replay web actions with a single click.

          -

          In this article, we will show you some of the features and benefits of using FULL iMacros Enterprise Edition v10.2.2823, as well as how to download and install it on your computer.

          -

          FULL iMacros Enterprise Edition v10.2.2823


          Downloadhttps://gohhs.com/2uEz3Q



          -

          What is FULL iMacros Enterprise Edition v10.2.2823?

          -

          FULL iMacros Enterprise Edition v10.2.2823 is a software that lets you automate web tasks by recording and replaying them in any web browser. You can use it to perform tasks such as:

          -
            -
          • Fill out web forms with data from a spreadsheet or database
          • -
          • Extract data from web pages and save it to a file or database
          • -
          • Test web applications for functionality, performance, and security
          • -
          • Download files, images, or videos from web pages
          • -
          • Login to websites with a single click
          • -
          • And much more!
          • -
          -

          FULL iMacros Enterprise Edition v10.2.2823 is the most advanced version of the software that offers additional features such as:

          -
            -
          • Support for multiple browsers, including Chrome, Firefox, Internet Explorer, Edge, and Safari
          • -
          • Support for scripting languages such as JavaScript, VBScript, Python, Perl, and more
          • -
          • Support for web automation frameworks such as Selenium WebDriver and Kantu
          • -
          • Support for cloud services such as Amazon Web Services (AWS) and Microsoft Azure
          • -
          • Support for enterprise-level security and encryption
          • -
          • And much more!
          • -
          -

          Why should you use FULL iMacros Enterprise Edition v10.2.2823?

          -

          There are many reasons why you should use FULL iMacros Enterprise Edition v10.2.2823 to automate your web tasks. Here are some of them:

          -
            -
          • You can save time and money by automating repetitive and tedious web tasks that would otherwise take hours or days to complete manually.
          • -
          • You can improve your productivity and efficiency by performing multiple web tasks simultaneously or in batches.
          • -
          • You can enhance your accuracy and reliability by eliminating human errors and inconsistencies that may occur when performing web tasks manually.
          • -
          • You can increase your flexibility and creativity by customizing your web automation scripts with various commands, variables, loops, conditions, and more.
          • -
          • You can expand your capabilities and opportunities by integrating your web automation scripts with other applications, databases, APIs, or web services.
          • -
          -

          How to download and install FULL iMacros Enterprise Edition v10.2.2823?

          -

          If you are interested in using FULL iMacros Enterprise Edition v10.2.2823 to automate your web tasks, you can download and install it on your computer by following these steps:

          -

          -
            -
          1. Go to the official website of iMacros and click on the "Download" button.
          2. -
          3. Select the "Enterprise Edition" option and fill out the form with your name, email address, company name, phone number, and country.
          4. -
          5. You will receive an email with a link to download the software. Click on the link and save the file to your computer.
          6. -
          7. Run the file and follow the instructions to install the software on your computer.
          8. -
          9. You will need to activate the software with a license key that you will receive by email after purchasing the software.
          10. -
          - -

          Congratulations! You have successfully downloaded and installed FULL iMacros Enterprise Edition v10.2.2823 on your computer. You can now start using it to automate your web tasks with ease. d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Gemvision Matrix 8 Full Crack 11.md b/spaces/scedlatioru/img-to-music/example/Gemvision Matrix 8 Full Crack 11.md deleted file mode 100644 index a35de4ecdbffb76ec94f66ec4132e411de205c10..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Gemvision Matrix 8 Full Crack 11.md +++ /dev/null @@ -1,11 +0,0 @@ -

          gemvision matrix 8 full crack 11


          Download Filehttps://gohhs.com/2uEAfD



          - -Sep 4, 2020 - Gemvision Matrix is a versatile and practical jewelry design software. ... Get tangible results in full 3D and rendering. The program is compatible with. -Read moreSer 4, 2020 - Gemvision Matrix is a diverse and practical jewelry design software. ... -Get tangible results in full 3D and rendering. -The program is compatible with Civil 3D, SketchUp, Autodesk and Zbrush and other graphics programs. -Camera and 3D tools are supported. -Gemvision Matrix allows you to create products in 3D and also supports working with the camera in real time. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Zelio Soft V 2.4.1.md b/spaces/scedlatioru/img-to-music/example/Zelio Soft V 2.4.1.md deleted file mode 100644 index 05ccf43f54e74c96f80e8eb6703ec294658727af..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Zelio Soft V 2.4.1.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Zelio Soft V 2.4.1


          Download File 🌟 https://gohhs.com/2uEAlI



          - -Zelio 2 set-up, operation and software instructions. ... V u 4.5 fr es. Electrical equipment should be installed, operated, serviced, and maintained only by qualified ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py b/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py deleted file mode 100644 index 497a00444c4c59725001993a63fe4617e9d323c8..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/common.py +++ /dev/null @@ -1,299 +0,0 @@ -# This file contains modules common to various models - -import math - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.utils.datasets import letterbox -from facelib.detection.yolov5face.utils.general import ( - make_divisible, - non_max_suppression, - scale_coords, - xyxy2xywh, -) - - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -def channel_shuffle(x, groups): - batchsize, num_channels, height, width = x.data.size() - channels_per_group = torch.div(num_channels, groups, rounding_mode="trunc") - - # reshape - x = x.view(batchsize, groups, channels_per_group, height, width) - x = torch.transpose(x, 1, 2).contiguous() - - # flatten - return x.view(batchsize, -1, height, width) - - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class StemBlock(nn.Module): - def __init__(self, c1, c2, k=3, s=2, p=None, g=1, act=True): - super().__init__() - self.stem_1 = Conv(c1, c2, k, s, p, g, act) - self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0) - self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1) - self.stem_2p = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True) - self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0) - - def forward(self, x): - stem_1_out = self.stem_1(x) - stem_2a_out = self.stem_2a(stem_1_out) - stem_2b_out = self.stem_2b(stem_2a_out) - stem_2p_out = self.stem_2p(stem_1_out) - return self.stem_3(torch.cat((stem_2b_out, stem_2p_out), 1)) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.LeakyReLU(0.1, inplace=True) - self.m = nn.Sequential(*(Bottleneck(c_, c_, 
shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1)) - - -class ShuffleV2Block(nn.Module): - def __init__(self, inp, oup, stride): - super().__init__() - - if not 1 <= stride <= 3: - raise ValueError("illegal stride value") - self.stride = stride - - branch_features = oup // 2 - - if self.stride > 1: - self.branch1 = nn.Sequential( - self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(inp), - nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - else: - self.branch1 = nn.Sequential() - - self.branch2 = nn.Sequential( - nn.Conv2d( - inp if (self.stride > 1) else branch_features, - branch_features, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(branch_features), - nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - - @staticmethod - def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False): - return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i) - - def forward(self, x): - if self.stride == 1: - x1, x2 = x.chunk(2, dim=1) - out = torch.cat((x1, self.branch2(x2)), dim=1) - else: - out = torch.cat((self.branch1(x), self.branch2(x)), dim=1) - out = channel_shuffle(out, 2) - return out - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, 
iou_thres=self.iou, classes=self.classes) - - -class AutoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - img_size = 640 # inference size (pixels) - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super().__init__() - self.model = model.eval() - - def autoshape(self): - print("autoShape already enabled, skipping... ") # model already converted to model.autoshape() - return self - - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=720, width=1280, RGB images example inputs are: - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(720,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(720,1280,3) - # numpy: = np.zeros((720,1280,3)) # HWC - # torch: = torch.zeros(16,3,720,1280) # BCHW - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1 = [], [] # image and inference shapes - for i, im in enumerate(imgs): - im = np.array(im) # to numpy - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = size / max(s) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255.0 # uint8 to fp16/32 - - # Inference - with torch.no_grad(): - y = self.model(x, augment, profile)[0] # forward - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - - # Post-process - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - return Detections(imgs, y, self.names) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, names=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1.0, 1.0], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) - - def __len__(self): - return self.n - - def tolist(self): - # return a list of Detections objects, i.e. 
'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names) for i in range(self.n)] - for d in x: - for k in ["imgs", "pred", "xyxy", "xyxyn", "xywh", "xywhn"]: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x diff --git a/spaces/sdadas/pirb/README.md b/spaces/sdadas/pirb/README.md deleted file mode 100644 index 5ea7799b0ec76e681e28f97ce665d30b3d492d3a..0000000000000000000000000000000000000000 --- a/spaces/sdadas/pirb/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Polish Information Retrieval Benchmark (PIRB) -emoji: 📈 -colorFrom: blue -colorTo: indigo -sdk: static -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/sdhsdhk/bingo111/src/components/ui/sheet.tsx b/spaces/sdhsdhk/bingo111/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/sdhsdhk/bingosjj/src/components/chat-history.tsx b/spaces/sdhsdhk/bingosjj/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
          -
          - 历史记录 -
          -
          -
          -
          -
          -
          -
          - -
          -

          无标题的聊天

          -
          -

          上午1:42

          -
          - - - - - - - - -
          -
          -
          -
          -
          -
          -
          -
          - ) -} diff --git a/spaces/sdhsdhk/bingosjj/src/lib/isomorphic/index.ts b/spaces/sdhsdhk/bingosjj/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py b/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py deleted file mode 100644 index 2bf15236a3eb24d8526073bc4fa2b274cccb3f96..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/modeling/common.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - -from typing import Type - - -class MLPBlock(nn.Module): - def __init__( - self, - embedding_dim: int, - mlp_dim: int, - act: Type[nn.Module] = nn.GELU, - ) -> None: - super().__init__() - self.lin1 = nn.Linear(embedding_dim, mlp_dim) - self.lin2 = nn.Linear(mlp_dim, embedding_dim) - self.act = act() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.lin2(self.act(self.lin1(x))) - - -# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa -# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa -class LayerNorm2d(nn.Module): - def __init__(self, num_channels: int, eps: float = 1e-6) -> None: - super().__init__() - self.weight = nn.Parameter(torch.ones(num_channels)) - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.eps = eps - - def forward(self, x: torch.Tensor) -> torch.Tensor: - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x diff --git a/spaces/shabnam91/Sanskrit-TTS/monotonic_align/__init__.py b/spaces/shabnam91/Sanskrit-TTS/monotonic_align/__init__.py deleted file mode 100644 index 49e32c9a128aeadc2044c362ff27f6a43f6d7815..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/shabnam91/Sanskrit-TTS/transforms.py b/spaces/shabnam91/Sanskrit-TTS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - 
unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * 
theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/assign_fixed_chains.py b/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/assign_fixed_chains.py deleted file mode 100644 index 0dcf7b688d177d6c83129d4e1e44c75cd254f44a..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/assign_fixed_chains.py +++ /dev/null @@ -1,39 +0,0 @@ -import argparse - -def main(args): - import json - - with open(args.input_path, 'r') as json_file: - json_list = list(json_file) - - global_designed_chain_list = [] - if args.chain_list != '': - global_designed_chain_list = [str(item) for item in args.chain_list.split()] - my_dict = {} - for json_str in json_list: - result = json.loads(json_str) - all_chain_list = [item[-1:] for item in list(result) if item[:9]=='seq_chain'] #['A','B', 'C',...] - if len(global_designed_chain_list) > 0: - designed_chain_list = global_designed_chain_list - else: - #manually specify, e.g. - designed_chain_list = ["A"] - fixed_chain_list = [letter for letter in all_chain_list if letter not in designed_chain_list] #fix/do not redesign these chains - my_dict[result['name']]= (designed_chain_list, fixed_chain_list) - - with open(args.output_path, 'w') as f: - f.write(json.dumps(my_dict) + '\n') - - -if __name__ == "__main__": - argparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - argparser.add_argument("--input_path", type=str, help="Path to the parsed PDBs") - argparser.add_argument("--output_path", type=str, help="Path to the output dictionary") - argparser.add_argument("--chain_list", type=str, default='', help="List of the chains that need to be designed") - - args = argparser.parse_args() - main(args) - -# Output looks like this: -# {"5TTA": [["A"], ["B"]], "3LIS": [["A"], ["B"]]} - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Android 12 Icon Pack APK - Soft and Colourful Icons for Your Screen.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Android 12 Icon Pack APK - Soft and Colourful Icons for Your Screen.md deleted file mode 100644 index e055a51822a560878bc510bed4afa8bcc7c5ed2c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Android 12 Icon Pack APK - Soft and Colourful Icons for Your Screen.md +++ /dev/null @@ -1,181 +0,0 @@ -
          - - -
          -

          Android 12 Icon Pack Apkpure: A Review

          -

          If you are looking for a way to spice up your Android device's appearance, you might want to try an icon pack. An icon pack is an app that replaces the default icons of your apps with custom ones that match a certain theme or style. Icon packs can give your device a fresh new look and make it more personalized.

          -

In this article, we will review one of the icon packs that you can find on Apkpure, a third-party app store that offers free downloads of APK files for Android apps and games (we look at the safety trade-offs of this approach later in the article). The icon pack we will review is called Android 12 Icon Pack Apkpure, a free app that customizes your app icons with a shaded, pencil-style look inspired by Android 12.

          -



          -

          We will cover the following topics:

          -
            -
          • What is Android 12?
          • -
          • What is Apkpure?
          • -
          • What is Android 12 Icon Pack Apkpure?
          • -
          • How to download and install Android 12 Icon Pack Apkpure?
          • -
          • How to use Android 12 Icon Pack Apkpure?
          • -
          • Pros and cons of Android 12 Icon Pack Apkpure?
          • -
          • Alternatives to Android 12 Icon Pack Apkpure?
          • -
          -

          By the end of this article, you will have a better idea of whether this icon pack is suitable for you or not.

          -

          What is Android 12?

          -

Android 12 is a major version of the Android operating system, developed by Google and released in October 2021. It introduces several new features and changes that aim to improve the user experience, privacy, and security of Android devices. Some of the main features and changes of Android 12 are:

          -
            -
          • Material You: A new design language that adapts to your personal preferences, such as your wallpaper, theme, and accent colors. Material You also allows you to customize the shapes, sizes, and fonts of your icons, widgets, and menus.
          • -
          • Color extraction: A feature that automatically extracts the dominant and complementary colors from your wallpaper and applies them to your system UI and apps.
          • -
          • Responsive motion: A feature that adds smooth animations and transitions to your device's UI, making it more fluid and responsive.
          • -
          • Conversation widgets: A feature that lets you access your recent messages, calls, and notifications from your favorite contacts on your home screen.
          • -
          • Accessibility improvements: A feature that adds new accessibility options, such as color correction, magnification gestures, and one-handed mode.
          • -
          -

          Android 12 also brings some enhancements to the performance, battery life, privacy, and security of your device. For example, you can now see which apps are accessing your microphone, camera, or location in real time, and revoke their permissions with a single tap. You can also use the Privacy Dashboard to see how often apps access your sensitive data and manage your permissions more easily.
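To make the colour extraction feature described above a little more concrete, here is a rough Python sketch of the same idea: find the most common colour in a wallpaper and use it as an accent. This is only an illustration, not Google's actual algorithm; it assumes the Pillow library is installed, and the wallpaper filename is hypothetical.

from PIL import Image  # Pillow, assumed to be installed

def dominant_colour(path):
    # Downscale first so counting pixels stays cheap, then tally the RGB values.
    img = Image.open(path).convert("RGB").resize((64, 64))
    counts = {}
    for pixel in img.getdata():
        counts[pixel] = counts.get(pixel, 0) + 1
    return max(counts, key=counts.get)

print(dominant_colour("wallpaper.jpg"))  # e.g. (34, 87, 122)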

          -

          What is Apkpure?

          -

Apkpure is a third-party app store that offers free downloads of APK files for Android apps and games. APK files are the installation packages for Android apps, which you can use to install apps on an Android device without going through the Google Play Store. Apkpure has a large collection of APK files for various categories of apps and games, such as social media, entertainment, education, productivity, sports, and more.
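As a side note (not part of the article's instructions), an APK file is simply a ZIP archive with a fixed layout, so you can inspect one with nothing more than the Python standard library. The filename below is a hypothetical placeholder.

import zipfile

# Every APK contains at least a manifest, compiled code, and resource files.
with zipfile.ZipFile("android12-iconpack.apk") as apk:
    for name in apk.namelist()[:10]:
        print(name)  # e.g. AndroidManifest.xml, classes.dex, res/...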

          -

          Apkpure has some advantages over the Google Play Store, such as :

          -


          -
            -
          • No region restrictions: You can download any app or game that is not available in your country or region.
          • -
          • No device compatibility issues: You can download any app or game that is not compatible with your device or Android version.
          • -
          • No update delays: You can download the latest versions of apps and games as soon as they are released by the developers.
          • -
          • No ads or in-app purchases: You can download modded or hacked versions of apps and games that remove ads or unlock premium features for free.
          • -
          -

          However, Apkpure also has some drawbacks and risks that you should be aware of before using it. For example :

          -
            -
          • No quality assurance: You cannot rely on the ratings, reviews, or verification of the apps and games on Apkpure. Some of them may be fake, outdated, or malicious.
          • -
          • No automatic updates: You have to manually check for updates and download them from Apkpure. You may miss some important bug fixes or security patches.
          • -
          • No warranty or support: You cannot get any refund, replacement, or technical support from the developers or Google if you encounter any problems with the apps or games from Apkpure.
          • -
          • Potential legal issues: You may violate the terms of service or intellectual property rights of the developers or Google if you download or use unauthorized or modified versions of apps or games from Apkpure.
          • -
          -

          Therefore, you should use Apkpure at your own risk and discretion. You should also make sure that you have a reliable antivirus app on your device to scan the APK files before installing them.
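One extra precaution, beyond running an antivirus scan, is to compare the downloaded file's checksum against a hash published by the developer, when one is available. The Python sketch below is only an illustration; the APK filename and the expected hash are hypothetical placeholders.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so large APKs never need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "sha256-published-by-the-developer"  # hypothetical placeholder
actual = sha256_of("android12-iconpack.apk")
print("Checksum OK" if actual == expected else "Checksum mismatch - do not install")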

          -

          What is Android 12 Icon Pack Apkpure?

          -

          Android 12 Icon Pack Apkpure is a free icon pack app that customizes your app icons with a shaded touch and a pencil style inspired by Android 12. It has over 5000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          How to download and install Android 12 Icon Pack Apkpure?

          -

          If you want to download and install Android 12 Icon Pack Apkpure on your device, you can follow these steps:

          -
            -
          1. Open your web browser and go to the Apkpure website: https://apkpure.com/
          2. -
          3. In the search box, type "Android 12 Icon Pack Apkpure" and hit enter.
          4. -
          5. From the search results, select the app that has the same name and icon as shown below:
          6. -
          - Android 12 Icon Pack Apkpure app on Apkpure -
            -
          1. On the app page, click on the green "Download APK" button and wait for the download to finish.
          2. -
          3. Once the download is complete, locate the APK file on your device and tap on it to open it.
          4. -
          5. You may see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source." To proceed, tap on "Settings" and enable the option "Allow from this source."
          6. -
          7. Go back to the APK file and tap on it again. You will see a confirmation message that says "Do you want to install this application?" Tap on "Install" and wait for the installation to finish.
          8. -
          9. Once the installation is complete, you can open the app from your app drawer or home screen.
          10. -
          -

          How to use Android 12 Icon Pack Apkpure?

          -

          If you want to use Android 12 Icon Pack Apkpure on your device, you can follow these steps:

          -
            -
          1. Open the app from your app drawer or home screen. You will see a welcome screen that shows some information about the app and its features. Tap on "Next" to continue.
          2. -
          3. You will see a screen that asks you to select a launcher that you want to apply the icon pack to. You can choose from a list of supported launchers or use any other launcher that supports icon packs. Tap on your preferred launcher and then tap on "Next".
          4. -
          5. You will see a screen that shows a preview of how your icons will look like after applying the icon pack. You can also change some settings such as icon size, shape, color, label, etc. by tapping on the gear icon at the top right corner. When you are satisfied with your settings, tap on "Apply".
          6. -
          7. You will see a confirmation message that says "Icon pack applied successfully". You can now enjoy your new icons on your device.
          8. -
          -

          Pros and cons of Android 12 Icon Pack Apkpure?

          -

          Here is a table that compares the pros and cons of using Android 12 Icon Pack Apkpure:

          - - - - - - - - - -
Pros:
• Free and easy to download and install from Apkpure.
• Compatible with most of the popular launchers for Android devices.
• Customizable with various settings for icon size, shape, color, label, etc.
• Inspired by Android 12 design language with shaded touch and pencil style.
• Supports over 5000 icons for popular apps and games.
• Has a request feature for adding more icons in future updates.

Cons:
• Not available on Google Play Store or official website of the developer.
• May not work well with some launchers or devices that do not support icon packs.
• May not match well with some wallpapers or themes that have different colors or styles.
• May not be updated regularly or frequently by the developer.
• May contain some bugs or errors that affect the performance or appearance of the icons.
• May pose some risks or issues related to privacy, security, or legality when downloading from Apkpure.
          -

          Alternatives to Android 12 Icon Pack Apkpure?

          -

          If you are not satisfied with Android 12 Icon Pack Apkpure or want to try some other icon pack apps that have different styles or more options for your icons, here are some alternatives that you can check out:

          -

          Outline Icons

          -

          This is a neon-style icon pack with bright colors and simple shapes. It has over 6000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Outline Icons from the Google Play Store or the official website of the developer: https://outlineicons.com/

          -

          Lux Dark: gradient icons

          -

          This is a circular icon pack with gradient-steeped color accents and dark backgrounds. It has over 4000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Lux Dark: gradient icons from the Google Play Store or the official website of the developer: https://luxdark.com/

          -

          Whicons

          -

          This is a minimalist icon pack with white icons and transparent backgrounds. It has over 7000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Whicons from the Google Play Store or the official website of the developer: https://whicons.com/

          Zwart

          -

          This is a dark version of Whicons with black icons and transparent backgrounds. It has over 7000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Zwart from the Google Play Store or the official website of the developer: https://zwart.com/

          -

          Unicorn Icon Pack

          -

          This is a colorful icon pack with vivid pinks, purples, blues, and greens. It has over 5000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Unicorn Icon Pack from the Google Play Store or the official website of the developer: https://unicorniconpack.com/

          -

          Ombre - Icon Pack

          -

          This is a shaped icon pack with ombre effects and vibrant colors. It has over 4000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices, such as Nova Launcher, Apex Launcher, ADW Launcher, Go Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Ombre - Icon Pack from the Google Play Store or the official website of the developer: https://ombreiconpack.com/

          -

          Fluidity - Adaptive Icon Pack

          -

          This is an adaptive icon pack that follows the shape and style of your launcher. It has over 3000 icons for popular apps and games, as well as some generic icons for categories such as folders, tools, music, etc. It also supports most of the popular launchers for Android devices that support adaptive icons, such as Nova Launcher, Action Launcher, Lawnchair Launcher, etc.

          -

          The icon pack app has a simple and user-friendly interface that lets you apply the icon pack with a few taps. You can also customize the icons according to your preference by changing their size, shape, color, label, etc. The app also has a request feature that allows you to request icons for apps that are not supported by the icon pack. The developer claims that he will try to add the requested icons in the next updates.

          -

          You can download Fluidity - Adaptive Icon Pack from the Google Play Store or the official website of the developer: https://fluidityiconpack.com/

          -

          Conclusion

          -

          In this article, we have reviewed Android 12 Icon Pack Apkpure, a free icon pack app that customizes your app icons with a shaded touch and a pencil style inspired by Android 12. We have also covered some of the main features and changes of Android 12, as well as some of the advantages and disadvantages of using Apkpure as a third-party app store. Finally, we have listed some alternatives to Android 12 Icon Pack Apkpure that you can try if you want a different style or more options for your icons.

          -

          We hope that this article has helped you to decide whether Android 12 Icon Pack Apkpure is suitable for you or not. If you want to give it a try, you can download it from Apkpure and follow the steps we have provided to install and use it on your device. However, if you are not satisfied with it or want to explore more icon pack apps, you can check out the alternatives we have suggested and see which one suits your taste and needs better.

          -

          Thank you for reading this article and have a great day!

          -

          FAQs

          -

          Here are some common questions and answers about Android 12 Icon Pack Apkpure and related topics:

          -
            -
          1. Q: Do I need to root my device to use Android 12 Icon Pack Apkpure?
          2. -
          3. A: No, you do not need to root your device to use Android 12 Icon Pack Apkpure. However, you do need to enable unknown sources on your device settings to install the APK file from Apkpure.
          4. -
          5. Q: Will Android 12 Icon Pack Apkpure work on any Android device?
          6. -
          7. A: Android 12 Icon Pack Apkpure will work on most of the Android devices that support icon packs. However, some launchers or devices may not support icon packs or may have compatibility issues with this icon pack app. In that case, you may need to use a different launcher or icon pack app.
          8. -
          9. Q: How can I update Android 12 Icon Pack Apkpure?
          10. -
          11. A: You can update Android 12 Icon Pack Apkpure by downloading the latest version of the APK file from Apkpure and installing it over the existing app. Alternatively, you can check for updates within the app by tapping on the menu icon at the top left corner and selecting "Check for updates".
          12. -
          13. Q: How can I uninstall Android 12 Icon Pack Apkpure?
          14. -
          15. A: You can uninstall Android 12 Icon Pack Apkpure by going to your device settings, selecting "Apps", finding the app in the list, and tapping on "Uninstall". Alternatively, you can long-press on the app icon on your home screen or app drawer and drag it to the "Uninstall" option.
          16. -
          17. Q: Is Android 12 Icon Pack Apkpure safe to use?
          18. -
          19. A: Android 12 Icon Pack Apkpure is safe to use as long as you download it from a trusted source like Apkpure. However, you should always scan the APK file with an antivirus app before installing it on your device. You should also be careful about granting permissions to the app and managing your privacy and security settings.
          20. -

          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Instagram APK on Your iPhone 6 in Minutes.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Instagram APK on Your iPhone 6 in Minutes.md deleted file mode 100644 index e48ac147c4079a4345915d65e596b77625619db9..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install Instagram APK on Your iPhone 6 in Minutes.md +++ /dev/null @@ -1,100 +0,0 @@ -
          -

          Download Instagram APK for iPhone 6

          -

          Instagram is one of the most popular social media apps in the world, with over one billion monthly active users. It allows you to share your photos and videos with your friends, family, and followers, as well as discover new content from people you might like. Whether you want to showcase your creativity, express yourself, or stay in touch with your loved ones, Instagram is the app for you.

          -



          -

But how do you get Instagram onto an iPhone 6? Despite the name, you cannot install an APK on an iPhone: APK files are Android installation packages, and iPhones only install apps through the App Store. In this article, we will show you how to install Instagram on your iPhone 6 from the App Store in a few simple steps. We will also give you some tips and tricks on how to use Instagram on your iPhone 6 and make the most of its features. Let's get started!

          -

          What is Instagram and why you should download it

          -

          Instagram is a free app that lets you capture and share your moments with the world. You can take photos and videos, apply filters and stickers, add captions and hashtags, and post them on your feed or story. You can also browse through other users' posts, like and comment on them, or send them direct messages. You can also watch short videos called reels, or go live and interact with your viewers in real time.

          -

          Instagram is more than just a photo-sharing app. It is also a platform where you can connect with people who share your interests, passions, or hobbies. You can follow celebrities, influencers, brands, or organizations that inspire you, or join communities that suit your niche. You can also explore new trends, topics, or places through the explore tab, or shop for products that catch your eye through the shop tab.

          -

          Instagram is a fun and easy way to express yourself and stay updated with what's happening around you. It is also a great tool to showcase your talent, promote your business, or build your personal brand. By downloading Instagram APK for iPhone 6, you can join the global community of Instagrammers and enjoy all the benefits of this amazing app.

          -

          How to download Instagram APK for iPhone 6

          -

          Downloading Instagram APK for iPhone 6 is not difficult at all. You just need to follow these four steps:

          -

          Step 1: Update your iPhone to the latest iOS version

          -

          In order to install Instagram on your iPhone 6, you'll need to update it to the most recent version of iOS that your iPhone supports. In your case, this would be version 12.5.7. To do this, go to Settings > General > Software Update and tap on Download and Install. This will ensure that your iPhone is compatible with the latest version of Instagram.

          -

          Step 2: Go to the App Store and search for Instagram

          -

          Once your iPhone is updated, open the App Store app and type "Instagram" in the search bar. You should see the Instagram app icon with a camera logo and a gradient background. Tap on it to open its page.

          -

          Step 3: Tap on the Get button and install the app

          -

          On the Instagram app page, you will see a blue button that says Get. Tap on it to start downloading the app. You may need to enter your Apple ID password or use Touch ID or Face ID to confirm the installation. Wait for a few seconds until the app is installed on your iPhone.

          -

          -

          Step 4: Launch the app and sign up or log in

          -

          After the installation is complete, you can launch the app by tapping on its icon on your home screen. You will see a welcome screen that asks you to sign up or log in. If you already have an Instagram account, you can enter your username and password or use the Log in with Facebook option. If you don't have an account yet, you can tap on Sign up and enter your email address, phone number, or Facebook account. You will also need to create a username and password, and optionally add a profile photo and a bio. Once you're done, you can start using Instagram on your iPhone 6.

          -

          Tips and tricks for using Instagram on iPhone 6

          -

          Now that you have downloaded Instagram APK for iPhone 6, you might be wondering how to use it effectively. Here are some tips and tricks that will help you make the most of your Instagram experience:

          -

          How to edit your profile and settings

          -

          To edit your profile and settings, tap on the profile icon at the bottom right corner of the app. You will see your profile page with your posts, followers, and following. To edit your profile, tap on the Edit Profile button. You can change your name, username, bio, website, and profile photo. You can also switch to a professional account if you want to access more features like insights, promotions, and shopping.

          -

          To edit your settings, tap on the menu icon at the top right corner of your profile page. You will see a list of options like Archive, Insights, Saved, Close Friends, and more. Tap on Settings at the bottom of the list. You can customize your account settings like privacy, security, notifications, and more. You can also manage your linked accounts, preferences, and help center.

          -

          How to post photos and videos

          -

          To post photos and videos on Instagram, tap on the plus icon at the bottom center of the app. You will see three options: Feed, Story, and Reel. Tap on Feed if you want to post on your main feed that will be visible to all your followers and anyone who visits your profile. Tap on Story if you want to post a temporary photo or video that will disappear after 24 hours and will be visible to your followers and anyone who replies to your story. Tap on Reel if you want to post a short video with music and effects that will be visible to everyone on the explore tab and anyone who follows the hashtags or sounds you use.

          -

          After choosing an option, you can either take a new photo or video using the camera button or select one from your gallery using the library button. You can also use the boomerang button to create a looping video or the layout button to create a collage. Once you have selected or taken a photo or video, you can edit it using the tools at the top of the screen. You can crop, rotate, adjust, filter, or add stickers to your photo or video. You can also add text or draw on it using the buttons at the top right corner of the screen.

          -

          When you're done editing, tap on Next at the top right corner of the screen. You will see a screen where you can add a caption, tag people, add a location, or share to other apps. You can also choose who can see your post by tapping on Advanced Settings and selecting Close Friends or Hide Story From. When you're ready to post, tap on Share at the top right corner of the screen.

          -

          How to use filters and stickers

          -

          Filters and stickers are fun ways to enhance your photos and videos on Instagram. To use filters, swipe left or right on the camera screen before taking a photo or video. You will see a variety of filters that change the color, mood, or style of your photo or video. You can also tap on the filter icon at the bottom right corner of the screen to browse more filters or create your own.

          -

          To use stickers, tap on the sticker icon at the top of the screen after taking a photo or video. You will see a list of stickers that you can add to your photo or video. Some stickers are interactive, like polls, questions, countdowns, or quizzes, that let you engage with your audience. You can also use stickers to add emojis, gifs, music, or time and weather to your photo or video. You can resize, rotate, or move the stickers around by using your fingers. You can also tap on the sticker to change its appearance or settings.

          -

          How to follow and interact with other users

          -

          One of the best things about Instagram is that you can follow and interact with other users who share your interests, passions, or hobbies. To follow someone, you can either search for their username using the magnifying glass icon at the bottom of the app, or tap on their name or profile photo when you see their post or story. You will see a blue button that says Follow. Tap on it to start following them. You can also tap on the message icon next to the follow button to send them a direct message.

          -

          To interact with someone, you can either like or comment on their post, reply or react to their story, or send them a direct message. To like a post, double-tap on it or tap on the heart icon below it. To comment on a post, tap on the speech bubble icon below it and type your comment. To reply or react to a story, swipe up on it and choose an emoji or type your message. To send a direct message, tap on the paper plane icon at the top right corner of the app and select the person you want to chat with. You can also send photos, videos, voice messages, stickers, or gifs in your direct messages.

          -

          How to use stories and reels

          -

Stories and reels are two of the most popular features of Instagram that let you create and watch short, creative posts with music and effects. Stories are temporary photos or videos that disappear after 24 hours and are visible to your followers and anyone who replies to your story. Reels are permanent short videos that are visible to everyone on the explore tab and to anyone who follows the hashtags or sounds you use.

          -

          To create a story, tap on the plus icon at the bottom center of the app and choose Story. You can either take a new photo or video using the camera button or select one from your gallery using the library button. You can also use the boomerang button to create a looping video or the layout button to create a collage. You can edit your story using the tools at the top of the screen, such as filters, stickers, text, or draw. When you're done editing, tap on Next at the top right corner of the screen and choose who can see your story by tapping on Close Friends or Hide Story From. Then tap on Share at the bottom of the screen.

          -

          To create a reel, tap on the plus icon at the bottom center of the app and choose Reel. You can either take a new video using the camera button or select one from your gallery using the library button. You can edit your reel using the tools at the left side of the screen, such as music, speed, effects, timer, or align. You can also trim or delete parts of your video by tapping on the scissors icon at the top of the screen. When you're done editing, tap on Next at the top right corner of the screen and add a caption, tag people, add a location, or share to other apps. You can also choose who can see your reel by tapping on Advanced Settings and selecting Close Friends or Hide Reel From. Then tap on Share at the bottom of the screen.

          -

          Conclusion

          -

          Instagram is a wonderful app that lets you share your photos and videos with the world, as well as discover new content from people you might like. By downloading Instagram APK for iPhone 6, you can enjoy all the features of this app on your device. You can also use some tips and tricks to make your Instagram experience more fun and engaging. We hope this article helped you learn how to download Instagram APK for iPhone 6 and how to use it effectively. Happy Instagramming!

          -

          FAQs

          -

          Here are some frequently asked questions about Instagram APK for iPhone 6:

          -

          Q: What is the difference between Instagram APK and Instagram app?

          -

          A: APK stands for Android Package Kit, which is a file format that allows you to install apps on Android devices. Instagram APK is the file that contains the Instagram app for Android devices. Instagram app is the application that you can download from the App Store or Google Play Store and use on your iOS or Android devices.

          -

          Q: Why do I need to update my iPhone to the latest iOS version?

          -

          A: Updating your iPhone to the latest iOS version ensures that your device is compatible with the latest version of Instagram and that it runs smoothly and securely. It also fixes any bugs or issues that might affect your performance or user experience.

          -

          Q: How can I delete or archive my posts, stories, or reels?

          -

          A: To delete or archive your posts, stories, or reels, go to your profile page and tap on the post, story, or reel that you want to delete or archive. Then tap on the three dots icon at the top right corner of the screen and choose Delete or Archive. Deleting will remove your post, story, or reel permanently from your account, while archiving will hide it from your profile but keep it in a private folder that only you can access.

          -

          Q: How can I switch between multiple accounts on Instagram?

          -

          A: To switch between multiple accounts on Instagram, go to your profile page and tap on your username at the top of the screen. You will see a list of accounts that you have added or logged in to. Tap on the account that you want to switch to. You can also add a new account by tapping on Add Account at the bottom of the list and entering your username and password.

          -

          Q: How can I report or block someone on Instagram?

          -

          A: To report or block someone on Instagram, go to their profile page and tap on the three dots icon at the top right corner of the screen. You will see a list of options like Report, Block, Restrict, or Mute. Report will let you notify Instagram about any inappropriate or abusive behavior from that user. Block will prevent that user from seeing your posts, stories, reels, or messages, and from following or contacting you. Restrict will limit that user's interactions with you without them knowing. Mute will hide that user's posts, stories, reels, or messages from your feed or inbox.

          -
          -
          \ No newline at end of file diff --git a/spaces/sklearn-docs/Detection-Error-Tradeoff-Curve/app.py b/spaces/sklearn-docs/Detection-Error-Tradeoff-Curve/app.py deleted file mode 100644 index 1910cc4d269f66c734df16fcb93627a72d5192e1..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Detection-Error-Tradeoff-Curve/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import gradio as gr -import matplotlib.pyplot as plt -from sklearn.datasets import make_classification -from sklearn.model_selection import train_test_split -from sklearn.preprocessing import StandardScaler -from sklearn.ensemble import RandomForestClassifier -from sklearn.pipeline import make_pipeline -from sklearn.svm import LinearSVC -from sklearn.metrics import DetCurveDisplay, RocCurveDisplay - -def generate_synthetic_data(n_samples, n_features, n_redundant, n_informative, random_state, n_clusters_per_class): - X, y = make_classification( - n_samples=n_samples, - n_features=n_features, - n_redundant=n_redundant, - n_informative=n_informative, - random_state=random_state, - n_clusters_per_class=n_clusters_per_class, - ) - return X, y - -def plot_roc_det_curves(classifier_names, svm_c, rf_max_depth, rf_n_estimators, rf_max_features, - n_samples, n_features, n_redundant, n_informative, random_state, n_clusters_per_class): - X, y = generate_synthetic_data(n_samples, n_features, n_redundant, n_informative, random_state, n_clusters_per_class) - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0) - - classifiers = { - "Linear SVM": make_pipeline(StandardScaler(), LinearSVC(C=svm_c)), - "Random Forest": RandomForestClassifier( - max_depth=rf_max_depth, n_estimators=rf_n_estimators, max_features=rf_max_features - ), - } - - fig, [ax_roc, ax_det] = plt.subplots(1, 2, figsize=(11, 5)) - - for classifier_name in classifier_names: - clf = classifiers[classifier_name] - clf.fit(X_train, y_train) - RocCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_roc, name=classifier_name) - DetCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax_det, name=classifier_name) - - ax_roc.set_title("Receiver Operating Characteristic (ROC) curves") - ax_det.set_title("Detection Error Tradeoff (DET) curves") - - ax_roc.grid(linestyle="--") - ax_det.grid(linestyle="--") - - plt.legend() - plt.tight_layout() - - return plt - -parameters = [ - gr.inputs.CheckboxGroup(["Linear SVM", "Random Forest"], label="Classifiers"), - gr.inputs.Slider(0.001, 0.1, step=0.001, default=0.025, label="Linear SVM C"), - gr.inputs.Slider(1, 10, step=1, default=5, label="Random Forest Max Depth"), - gr.inputs.Slider(1, 20, step=1, default=10, label="Random Forest n_estimators"), - gr.inputs.Slider(1, 10, step=1, default=1, label="Random Forest max_features"), - gr.inputs.Slider(100, 2000, step=100, default=1000, label="Number of Samples"), - gr.inputs.Slider(1, 10, step=1, default=2, label="Number of Features"), - gr.inputs.Slider(0, 10, step=1, default=0, label="Number of Redundant Features"), - gr.inputs.Slider(1, 10, step=1, default=2, label="Number of Informative Features"), - gr.inputs.Slider(0, 100, step=1, default=1, label="Random State"), - gr.inputs.Slider(1, 10, step=1, default=1, label="Number of Clusters per Class"), -] - -examples = [ - [ - ["Linear SVM"], - 0.025, - 5, - 10, - 1, - 1000, - 2, - 0, - 2, - 1, - 1, - ], - [ - ["Random Forest"], - 0.025, - 5, - 10, - 1, - 1000, - 2, - 0, - 2, - 1, - 1, - ], - [ - ["Linear SVM", "Random Forest"], - 0.025, - 5, - 10, - 1, - 1000, - 2, - 0, - 2, - 1, - 1, - ] -] - 
-iface = gr.Interface(title = "Detection error tradeoff (DET) curve", fn=plot_roc_det_curves, inputs=parameters, outputs="plot", description="In this example, we compare two binary classification multi-threshold metrics: the Receiver Operating Characteristic (ROC) and the Detection Error Tradeoff (DET). For such purpose, we evaluate two different classifiers for the same classification task. See the original scikit-learn example here: https://scikit-learn.org/stable/auto_examples/model_selection/plot_det.html", examples=examples) -iface.launch() diff --git a/spaces/skytnt/moe-tts/text/mandarin.py b/spaces/skytnt/moe-tts/text/mandarin.py deleted file mode 100644 index ff71de9788e4f20c897b971a775d1ecfbfe1c7b7..0000000000000000000000000000000000000000 --- a/spaces/skytnt/moe-tts/text/mandarin.py +++ /dev/null @@ -1,329 +0,0 @@ -import os -import sys -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - -logging.getLogger('jieba').setLevel(logging.WARNING) -jieba.initialize() - - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - 
('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = 
re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text diff --git a/spaces/sloppyjoe/doodoodetective/README.md b/spaces/sloppyjoe/doodoodetective/README.md deleted file mode 100644 index 8cbd8bd9518e1e387ff2f97af3f8a027f4e554ce..0000000000000000000000000000000000000000 --- a/spaces/sloppyjoe/doodoodetective/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Doo-doo Detective -emoji: 📊 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sparswan/AW-02-H5-AR-VR-IOT/index.html b/spaces/sparswan/AW-02-H5-AR-VR-IOT/index.html deleted file mode 100644 index f64aad6580cd12cbdbb0bcc0321ed7a6486d2a19..0000000000000000000000000000000000000000 --- a/spaces/sparswan/AW-02-H5-AR-VR-IOT/index.html +++ /dev/null @@ -1,66 +0,0 @@ - - - - Dynamic Lights - A-Frame - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/fconv_self_att.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/fconv_self_att.py deleted file mode 100644 index 8357ef7847ed25a62345e219c41906156828c233..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/fconv_self_att.py +++ /dev/null @@ -1,674 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import math -import os - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import checkpoint_utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.models import ( - CompositeEncoder, - FairseqDecoder, - FairseqEncoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - DownsampledMultiHeadAttention, - FairseqDropout, - GradMultiply, - LayerNorm, - LearnedPositionalEmbedding, - LinearizedConvolution, -) - - -logger = logging.getLogger(__name__) - - -@register_model("fconv_self_att") -class FConvModelSelfAtt(FairseqEncoderDecoderModel): - @classmethod - def hub_models(cls): - return { - "conv.stories.pretrained": { - "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz", - "checkpoint_file": "pretrained_checkpoint.pt", - "tokenizer": "nltk", - }, - "conv.stories": { - "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz", - "checkpoint_file": "fusion_checkpoint.pt", - "tokenizer": "nltk", - "pretrained": "True", - "pretrained_checkpoint": "./pretrained_checkpoint.pt", - }, - # Test set containing dictionaries - "data.stories": "https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2", - } - - def __init__(self, encoder, decoder, pretrained_encoder=None): - super().__init__(encoder, decoder) - self.encoder.num_attention_layers = sum( - layer is not None for layer in decoder.attention - ) - self.pretrained_encoder = pretrained_encoder - if self.pretrained_encoder is None: - encoders = {"encoder": encoder} - else: - encoders = {"encoder": encoder, "pretrained": self.pretrained_encoder} - # for fusion model, CompositeEncoder contains both pretrained and training 
encoders - # these are forwarded and then combined in the decoder - self.encoder = CompositeEncoder(encoders) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-layers', type=str, metavar='EXPR', - help='encoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-layers', type=str, metavar='EXPR', - help='decoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='EXPR', - help='decoder attention [True, ...]') - parser.add_argument('--self-attention', type=str, metavar='EXPR', - help='decoder self-attention layers, ex: [True] + [False]*5') - parser.add_argument('--multihead-attention-nheads', type=int, - help='Number of heads to use in attention') - parser.add_argument('--multihead-self-attention-nheads', type=int, - help='Number of heads to use in self-attention') - parser.add_argument('--encoder-attention', type=str, metavar='EXPR', - help='encoder attention [True, ...]') - parser.add_argument('--encoder-attention-nheads', type=int, - help='Number of heads to use in encoder attention') - parser.add_argument('--project-input', type=str, metavar='EXPR', - help='Use projections in self-attention [True, ...]') - parser.add_argument('--gated-attention', type=str, metavar='EXPR', - help='Use GLU layers in self-attention projections [True, ...]') - parser.add_argument('--downsample', type=str, metavar='EXPR', - help='Use downsampling in self-attention [True, ...]') - parser.add_argument('--pretrained-checkpoint', metavar='DIR', - help='path to load checkpoint from pretrained model') - parser.add_argument('--pretrained', type=str, metavar='EXPR', - help='use pretrained model when training [True, ...]') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - trained_encoder, trained_decoder = None, None - pretrained = eval(args.pretrained) - if pretrained: - logger.info("loading pretrained model") - if not os.path.exists(args.pretrained_checkpoint): - new_pretrained_checkpoint = os.path.join( - args.data, args.pretrained_checkpoint - ) - if os.path.exists(new_pretrained_checkpoint): - args.pretrained_checkpoint = new_pretrained_checkpoint - trained_model = checkpoint_utils.load_model_ensemble( - filenames=[args.pretrained_checkpoint], - task=task, - )[0][0] - trained_decoder = list(trained_model.children())[1] - trained_encoder = list(trained_model.children())[0] - - # freeze pretrained model - for param in trained_decoder.parameters(): - param.requires_grad = False - for param in trained_encoder.parameters(): - param.requires_grad = False - - encoder = FConvEncoder( - task.source_dictionary, - embed_dim=args.encoder_embed_dim, - convolutions=eval(args.encoder_layers), - dropout=args.dropout, - max_positions=args.max_source_positions, - attention=eval(args.encoder_attention), - attention_nheads=args.encoder_attention_nheads, - ) - - decoder = FConvDecoder( - task.target_dictionary, - embed_dim=args.decoder_embed_dim, - convolutions=eval(args.decoder_layers), - 
out_embed_dim=args.decoder_out_embed_dim, - attention=eval(args.decoder_attention), - dropout=args.dropout, - max_positions=args.max_target_positions, - selfattention=eval(args.self_attention), - attention_nheads=args.multihead_attention_nheads, - selfattention_nheads=args.multihead_self_attention_nheads, - project_input=eval(args.project_input), - gated_attention=eval(args.gated_attention), - downsample=eval(args.downsample), - pretrained=pretrained, - trained_decoder=trained_decoder, - ) - model = FConvModelSelfAtt(encoder, decoder, trained_encoder) - - return model - - @property - def pretrained(self): - return self.pretrained_encoder is not None - - -class FConvEncoder(FairseqEncoder): - """Convolutional encoder""" - - def __init__( - self, - dictionary, - embed_dim=512, - max_positions=1024, - convolutions=((512, 3),) * 20, - dropout=0.1, - attention=False, - attention_nheads=1, - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.num_attention_layers = None - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - self.embed_positions = PositionalEmbedding( - max_positions, - embed_dim, - self.padding_idx, - ) - - def expand_bool_array(val): - if isinstance(val, bool): - # expand True into [True, True, ...] and do the same with False - return [val] * len(convolutions) - return val - - attention = expand_bool_array(attention) - - in_channels = convolutions[0][0] - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.attention = nn.ModuleList() - self.attproj = nn.ModuleList() - for i, (out_channels, kernel_size) in enumerate(convolutions): - self.projections.append( - Linear(in_channels, out_channels) - if in_channels != out_channels - else None - ) - self.convolutions.append( - ConvTBC(in_channels, out_channels * 2, kernel_size, dropout=dropout) - ) - - self.attention.append( - SelfAttention(out_channels, embed_dim, attention_nheads) - if attention[i] - else None - ) - in_channels = out_channels - - self.fc2 = Linear(in_channels, embed_dim) - - def forward(self, src_tokens, src_lengths): - # embed tokens and positions - x = self.embed_tokens(src_tokens) + self.embed_positions(src_tokens) - x = self.dropout_module(x) - input_embedding = x.transpose(0, 1) - - # project to size of convolution - x = self.fc1(x) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() # -> T x B - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # temporal convolutions - for proj, conv, attention in zip( - self.projections, self.convolutions, self.attention - ): - residual = x if proj is None else proj(x) - - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - x = self.dropout_module(x) - padding_l = (conv.kernel_size[0] - 1) // 2 - padding_r = conv.kernel_size[0] // 2 - x = F.pad(x, (0, 0, 0, 0, padding_l, padding_r)) - x = conv(x) - x = F.glu(x, dim=2) - if attention is not None: - x = attention(x) - x = (x + residual) * math.sqrt(0.5) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # project back to size of embedding - x = self.fc2(x) - - if encoder_padding_mask is not None: - encoder_padding_mask = encoder_padding_mask.t() # -> B x T - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - # scale 
gradients (this only affects backward, not forward) - x = GradMultiply.apply(x, 1.0 / (2.0 * self.num_attention_layers)) - - # add output to input embedding for attention - y = (x + input_embedding.transpose(0, 1)) * math.sqrt(0.5) - - return { - "encoder_out": (x, y), - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = tuple( - eo.index_select(0, new_order) for eo in encoder_out["encoder_out"] - ) - - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - - if "pretrained" in encoder_out: - encoder_out["pretrained"]["encoder_out"] = tuple( - eo.index_select(0, new_order) - for eo in encoder_out["pretrained"]["encoder_out"] - ) - - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return self.embed_positions.max_positions - - -@with_incremental_state -class FConvDecoder(FairseqDecoder): - """Convolutional decoder""" - - def __init__( - self, - dictionary, - embed_dim=512, - out_embed_dim=256, - max_positions=1024, - convolutions=((512, 3),) * 8, - attention=True, - dropout=0.1, - selfattention=False, - attention_nheads=1, - selfattention_nheads=1, - project_input=False, - gated_attention=False, - downsample=False, - pretrained=False, - trained_decoder=None, - ): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([2])) - self.pretrained = pretrained - self.pretrained_decoder = trained_decoder - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.need_attn = True - in_channels = convolutions[0][0] - - def expand_bool_array(val): - if isinstance(val, bool): - # expand True into [True, True, ...] and do the same with False - return [val] * len(convolutions) - return val - - attention = expand_bool_array(attention) - selfattention = expand_bool_array(selfattention) - - if not isinstance(attention, list) or len(attention) != len(convolutions): - raise ValueError( - "Attention is expected to be a list of booleans of " - "length equal to the number of layers." 
- ) - - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - - self.embed_positions = PositionalEmbedding( - max_positions, - embed_dim, - padding_idx, - ) - - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.attention = nn.ModuleList() - self.selfattention = nn.ModuleList() - self.attproj = nn.ModuleList() - for i, (out_channels, kernel_size) in enumerate(convolutions): - self.projections.append( - Linear(in_channels, out_channels) - if in_channels != out_channels - else None - ) - self.convolutions.append( - LinearizedConv1d( - in_channels, - out_channels * 2, - kernel_size, - padding=(kernel_size - 1), - dropout=dropout, - ) - ) - - self.attention.append( - DownsampledMultiHeadAttention( - out_channels, - embed_dim, - attention_nheads, - project_input=project_input, - gated=False, - downsample=False, - ) - if attention[i] - else None - ) - - self.attproj.append( - Linear(out_channels, embed_dim, dropout=dropout) - if attention[i] - else None - ) - self.selfattention.append( - SelfAttention( - out_channels, - embed_dim, - selfattention_nheads, - project_input=project_input, - gated=gated_attention, - downsample=downsample, - ) - if selfattention[i] - else None - ) - in_channels = out_channels - - self.fc2 = Linear(in_channels, out_embed_dim) - self.fc3 = Linear(out_embed_dim, num_embeddings, dropout=dropout) - - # model fusion - if self.pretrained: - # independent gates are learned from the concatenated input - self.gate1 = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid() - ) - self.gate2 = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid() - ) - # pretrained and trained models are joined - self.joining = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim * 2), - LayerNorm(out_embed_dim * 2), - nn.GLU(), - Linear(out_embed_dim, out_embed_dim * 2), - LayerNorm(out_embed_dim * 2), - nn.GLU(), - Linear(out_embed_dim, out_embed_dim), - LayerNorm(out_embed_dim), - ) - # pretrained model contains an output layer that is nhid -> vocab size - # but the models are combined in their hidden state - # the hook stores the output of the pretrained model forward - self.pretrained_outputs = {} - - def save_output(): - def hook(a, b, output): - self.pretrained_outputs["out"] = output - - return hook - - self.pretrained_decoder.fc2.register_forward_hook(save_output()) - - def forward(self, prev_output_tokens, encoder_out): - trained_encoder_out = encoder_out["pretrained"] if self.pretrained else None - encoder_out = encoder_out["encoder"]["encoder_out"] - - encoder_a, encoder_b = self._split_encoder_out(encoder_out) - - # embed positions - positions = self.embed_positions(prev_output_tokens) - - # embed tokens and positions - x = self.embed_tokens(prev_output_tokens) + positions - x = self.dropout_module(x) - target_embedding = x.transpose(0, 1) - - # project to size of convolution - x = self.fc1(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # temporal convolutions - avg_attn_scores = None - for proj, conv, attention, selfattention, attproj in zip( - self.projections, - self.convolutions, - self.attention, - self.selfattention, - self.attproj, - ): - residual = x if proj is None else proj(x) - - x = self.dropout_module(x) - x = conv(x) - x = F.glu(x, dim=2) - - # attention - if attention is not None: - r = x - x, attn_scores = attention( - attproj(x) + 
target_embedding, encoder_a, encoder_b - ) - x = x + r - if not self.training and self.need_attn: - if avg_attn_scores is None: - avg_attn_scores = attn_scores - else: - avg_attn_scores.add_(attn_scores) - - if selfattention is not None: - x = selfattention(x) - - x = (x + residual) * math.sqrt(0.5) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - # project back to size of vocabulary - x = self.fc2(x) - x = self.dropout_module(x) - if not self.pretrained: - x = self.fc3(x) - - # fusion gating - if self.pretrained: - trained_x, _ = self.pretrained_decoder.forward( - prev_output_tokens, trained_encoder_out - ) - y = torch.cat([x, self.pretrained_outputs["out"]], dim=-1) - gate1 = self.gate1(y) - gate2 = self.gate2(y) - gated_x1 = gate1 * x - gated_x2 = gate2 * self.pretrained_outputs["out"] - fusion = torch.cat([gated_x1, gated_x2], dim=-1) - fusion = self.joining(fusion) - fusion_output = self.fc3(fusion) - return fusion_output, avg_attn_scores - else: - return x, avg_attn_scores - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return self.embed_positions.max_positions - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def _split_encoder_out(self, encoder_out): - """Split and transpose encoder outputs.""" - # transpose only once to speed up attention layers - encoder_a, encoder_b = encoder_out - encoder_a = encoder_a.transpose(0, 1).contiguous() - encoder_b = encoder_b.transpose(0, 1).contiguous() - result = (encoder_a, encoder_b) - return result - - -class SelfAttention(nn.Module): - def __init__( - self, - out_channels, - embed_dim, - num_heads, - project_input=False, - gated=False, - downsample=False, - ): - super().__init__() - self.attention = DownsampledMultiHeadAttention( - out_channels, - embed_dim, - num_heads, - dropout=0, - bias=True, - project_input=project_input, - gated=gated, - downsample=downsample, - ) - self.in_proj_q = Linear(out_channels, embed_dim) - self.in_proj_k = Linear(out_channels, embed_dim) - self.in_proj_v = Linear(out_channels, embed_dim) - self.ln = LayerNorm(out_channels) - - def forward(self, x): - residual = x - query = self.in_proj_q(x) - key = self.in_proj_k(x) - value = self.in_proj_v(x) - x, _ = self.attention( - query, key, value, mask_future_timesteps=True, use_scalar_bias=True - ) - return self.ln(x + residual) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - m.weight.data.normal_(0, 0.1) - return m - - -def PositionalEmbedding(num_embeddings, embedding_dim, padding_idx): - m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx) - m.weight.data.normal_(0, 0.1) - return m - - -def Linear(in_features, out_features, dropout=0.0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features) - m.weight.data.normal_(mean=0, std=math.sqrt((1 - dropout) / in_features)) - m.bias.data.zero_() - return m - - -def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer optimized for decoding""" - m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - m.weight.data.normal_(mean=0, std=std) - m.bias.data.zero_() - return m - - -def ConvTBC(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer""" - from fairseq.modules 
import ConvTBC - - m = ConvTBC(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - m.weight.data.normal_(mean=0, std=std) - m.bias.data.zero_() - return m - - -@register_model_architecture("fconv_self_att", "fconv_self_att") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_layers = getattr(args, "encoder_layers", "[(512, 3)] * 3") - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_layers = getattr(args, "decoder_layers", "[(512, 3)] * 8") - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.decoder_attention = getattr(args, "decoder_attention", "True") - args.self_attention = getattr(args, "self_attention", "False") - args.encoder_attention = getattr(args, "encoder_attention", "False") - args.multihead_attention_nheads = getattr(args, "multihead_attention_nheads", 1) - args.multihead_self_attention_nheads = getattr( - args, "multihead_self_attention_nheads", 1 - ) - args.encoder_attention_nheads = getattr(args, "encoder_attention_nheads", 1) - args.project_input = getattr(args, "project_input", "False") - args.gated_attention = getattr(args, "gated_attention", "False") - args.downsample = getattr(args, "downsample", "False") - args.pretrained_checkpoint = getattr(args, "pretrained_checkpoint", "") - args.pretrained = getattr(args, "pretrained", "False") - - -@register_model_architecture("fconv_self_att", "fconv_self_att_wp") -def fconv_self_att_wp(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_layers = getattr( - args, "encoder_layers", "[(128, 3)] * 2 + [(512,3)] * 1" - ) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_layers = getattr( - args, "decoder_layers", "[(512, 4)] * 4 + [(768, 4)] * 2 + [(1024, 4)] * 1" - ) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.self_attention = getattr(args, "self_attention", "True") - args.multihead_self_attention_nheads = getattr( - args, "multihead_self_attention_nheads", 4 - ) - args.project_input = getattr(args, "project_input", "True") - args.gated_attention = getattr(args, "gated_attention", "True") - args.downsample = getattr(args, "downsample", "True") - base_architecture(args) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Arabic Midi File Songsl.md b/spaces/stomexserde/gpt4-ui/Examples/Arabic Midi File Songsl.md deleted file mode 100644 index 51e152e0d614cbd876ae48baac4e1dadd4dc0eb3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Arabic Midi File Songsl.md +++ /dev/null @@ -1,37 +0,0 @@ -

          How to Enjoy Arabic Midi File Songs on Your Computer

          -

          Arabic music is rich and diverse, with many genres and styles that reflect the culture and history of the Arab world. One of the ways to enjoy Arabic music is by listening to Arabic midi file songs, which are digital files that contain musical information that can be played by a computer or a synthesizer. Midi files are small and easy to download, and they can be customized and edited with various software programs. In this article, we will show you how to find, download, and play Arabic midi file songs on your computer.

          - -

          Where to Find Arabic Midi File Songs

          -

          There are many websites that offer free Arabic midi file songs for download. Some of them are:

          -

          Arabic Midi File Songsl


          DOWNLOAD ->>> https://urlgoal.com/2uI9XV



          -
            -
          • The Microtonal Arabic MIDI Palace: This website features remixes of charming Arabic popular music using native Arabic microtonal scales. The songs are realized using a Roland SC-88Pro PCM desktop sound module exclusively. You can listen to the songs online or download them as midi files.
          • -
          • Midis101.com: This website has a large collection of midi files from various genres and languages, including Arabic. You can search for Arabic midi files by name or browse through the categories. You can also preview the songs before downloading them.
          • -
          • SoundCloud: This is a popular online audio platform that allows users to upload, share, and stream music and podcasts. You can find some Arabic midi file songs on SoundCloud by searching for the keyword or following some users who upload them. You can also create your own playlists and share them with others.
          • -
          - -

          How to Download Arabic Midi File Songs

          -

          Downloading Arabic midi file songs is usually very simple and fast. All you need to do is:

          -
            -
          1. Go to the website that offers the midi file you want.
          2. -
          3. Click on the download button or link, or right-click on the midi file and choose "Save link as" or "Save target as".
          4. -
          5. Choose a location on your computer where you want to save the midi file.
          6. -
          7. Wait for the download to finish.
          8. -
          - -
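If you prefer to script the download rather than click through a browser, the short Python sketch below does the same job with the requests library. The URL and output file name are placeholders for illustration, not links taken from the sites above.

```python
# Minimal sketch: fetch a MIDI file over HTTP and save it to disk.
# "https://example.com/song.mid" and "song.mid" are placeholders --
# substitute the link and file name you actually want to use.
import requests

url = "https://example.com/song.mid"
response = requests.get(url, timeout=30)
response.raise_for_status()              # abort on HTTP errors (404, 500, ...)

with open("song.mid", "wb") as f:        # write the raw bytes unchanged
    f.write(response.content)

print(f"Saved {len(response.content)} bytes to song.mid")
```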

          How to Play Arabic Midi File Songs

          -

          To play Arabic midi file songs on your computer, you need a software program that can read and play midi files. There are many options available, such as:

          -
            -
          • VanBasco's Karaoke Player: This is a free karaoke player that can play midi files and display lyrics on the screen. You can also change the tempo, key, volume, and instruments of the songs.
          • -
          • Synthesia: This is a fun piano game that can play midi files and show you how to play them on a virtual keyboard. You can also connect a real keyboard or piano to your computer and play along with the songs.
          • -
          • FL Studio: This is a powerful digital audio workstation that can play, edit, and create midi files. You can also add effects, instruments, samples, and vocals to your songs.
          • -
          - -
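The programs above are desktop applications, but a MIDI file can also be played from a few lines of code. Here is a minimal Python sketch using pygame; it assumes pygame is installed, that your pygame/SDL_mixer build includes MIDI playback, and that "song.mid" is a placeholder for the file you downloaded.

```python
# Minimal sketch: play a local MIDI file with pygame's music mixer.
# "song.mid" is a placeholder path; MIDI support depends on your
# pygame/SDL_mixer build (most desktop builds include it).
import time
import pygame

pygame.mixer.init()                      # start the audio subsystem
pygame.mixer.music.load("song.mid")      # load the MIDI file
pygame.mixer.music.play()

while pygame.mixer.music.get_busy():     # block until playback finishes
    time.sleep(0.5)

pygame.quit()
```

pygame is used here only because it ships with simple music playback; any library that can drive a MIDI synthesizer would work just as well.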

          Conclusion

          -

          Arabic midi file songs are a great way to enjoy Arabic music on your computer. You can find, download, and play them easily with various software programs. You can also customize and edit them to suit your preferences. Whether you want to listen, sing, or learn how to play Arabic music, Arabic midi file songs are a fun and convenient option.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Automatic Call Recorder Hide App Pro ? CallBOX V5.5 [Premium] [Latest] __LINK__.md b/spaces/stomexserde/gpt4-ui/Examples/Automatic Call Recorder Hide App Pro ? CallBOX V5.5 [Premium] [Latest] __LINK__.md deleted file mode 100644 index 843f40a0a63a0ab776e143aaabd8f1eb65b39b16..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Automatic Call Recorder Hide App Pro ? CallBOX V5.5 [Premium] [Latest] __LINK__.md +++ /dev/null @@ -1,46 +0,0 @@ - -

          How to Record and Hide Your Phone Calls with callBOX v5.5

          -

          If you are looking for a way to record and hide your phone calls, you might want to check out callBOX v5.5, the latest version of the premium automatic call recorder app. callBOX v5.5 is a powerful and easy-to-use app that lets you record any phone call you want and save it to your device or cloud storage. You can also choose to hide the app icon from your launcher and access it with a secret code or gesture.

          -

          Automatic Call Recorder Hide App Pro – callBOX v5.5 [Premium] [Latest]


          Download · https://urlgoal.com/2uI6cr



          -

          In this article, we will show you how to download and install callBOX v5.5 on your Android device, and how to use its features to record and hide your phone calls.

          -

          Download and Install callBOX v5.5

          -

          callBOX v5.5 is a premium app that requires a subscription to unlock all its features. However, you can download the latest version of the app for free from our website. Just follow these steps:

          -
            -
          1. Click on the download link below and save the APK file to your device.
          2. -
          3. Go to your device settings and enable installation from unknown sources.
          4. -
          5. Locate the APK file and tap on it to install it.
          6. -
          7. Launch the app and grant it the necessary permissions to access your phone calls, contacts, storage, and microphone.
          8. -
          9. Sign up for a free trial or log in with your existing account.
          10. -
          -

          Congratulations! You have successfully installed callBOX v5.5 on your device.

          -

          Record and Hide Your Phone Calls with callBOX v5.5

          -

          callBOX v5.5 has a simple and intuitive interface that lets you record and hide your phone calls with ease. Here are some of the features you can use:

          -
            -
          • Automatic Call Recording: You can set the app to record all incoming and outgoing calls, or only specific contacts or numbers. You can also exclude certain contacts or numbers from recording.
          • -
          • Manual Call Recording: You can start or stop recording any call manually by tapping on the floating widget on your screen.
          • -
          • Call Recording Quality: You can choose the audio format and quality of your recordings, such as MP3, WAV, AMR, or AAC.
          • -
          • Cloud Storage: You can sync your recordings to your Google Drive or Dropbox account for backup and easy access.
          • -
          • Call Log: You can view, play, delete, share, or lock your recordings from the app's call log. You can also add notes or labels to your recordings for easy identification.
          • -
          • Hide App Icon: You can hide the app icon from your launcher and access it with a secret code or gesture. You can also change the app name and icon to disguise it as another app.
          • -
          -

          With callBOX v5.5, you can record and hide your phone calls with confidence and convenience. Download the app today and enjoy its premium features for free!

          - -

          Why You Need callBOX v5.5

          -

          There are many reasons why you might want to record and hide your phone calls. For example, you might want to:

          -
            -
          • Keep a record of important conversations, such as business deals, legal agreements, or customer service interactions.
          • -
          • Collect evidence of harassment, fraud, or abuse.
          • -
          • Protect yourself from false accusations or lawsuits.
          • -
          • Monitor your children's or employees' phone activities.
          • -
          • Remember what you said or heard during a call.
          • -
          -

          Whatever your reason, callBOX v5.5 can help you record and hide your phone calls with ease and security. You can trust that your recordings are safe and private, and that no one can access them without your permission.

          -

          What Users Say About callBOX v5.5

          -

          callBOX v5.5 is one of the most popular and trusted automatic call recorder apps on the market. It has over 10 million downloads and a 4.5-star rating on the Google Play Store. Here are some of the reviews from satisfied users:

          -

          -
          "This app is amazing! It records all my calls automatically and saves them to my Google Drive. I can also hide the app icon and access it with a secret code. It's very useful for my work and personal life."
          -
          "I love this app! It helps me keep track of my conversations with my clients and colleagues. I can also add notes and labels to my recordings for easy reference. The audio quality is very good and the app is easy to use."
          -
          "This app is a lifesaver! It saved me from a scammer who tried to trick me into giving him money. I recorded the call and reported him to the authorities. The app also hides itself from my launcher so no one can see it."
          -

          As you can see, callBOX v5.5 is a must-have app for anyone who wants to record and hide their phone calls. Don't miss this opportunity to get the latest version of the app for free from our website. Download callBOX v5.5 now and enjoy its premium features!

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Baidu Pc Faster Portablel.md b/spaces/stomexserde/gpt4-ui/Examples/Baidu Pc Faster Portablel.md deleted file mode 100644 index df4d916e09daf62b3b2bca5385c90688632adfb3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Baidu Pc Faster Portablel.md +++ /dev/null @@ -1,93 +0,0 @@ -
          -

          Baidu PC Faster Portable: A Comprehensive Review

          -

          If you are looking for a way to speed up your PC, clean up junk files, and protect your privacy, you might have heard of Baidu PC Faster. But did you know that there is also a portable version of this popular optimization tool? In this article, we will review Baidu PC Faster Portable, a lightweight and convenient software that you can run from a USB drive or any other removable device. We will cover the following topics:

          -

          Baidu Pc Faster Portablel


          DOWNLOAD ->>> https://urlgoal.com/2uI5Kv



          -
            -
          • What is Baidu PC Faster Portable?
          • -
          • How to download and use Baidu PC Faster Portable?
          • -
          • How does Baidu PC Faster Portable compare to other PC optimization tools?
          • -
          • Conclusion
          • -
          • FAQs
          • -
          -

          By the end of this article, you will have a clear idea of what Baidu PC Faster Portable can do for your PC and whether it is worth trying. Let's get started!

          -

          What is Baidu PC Faster Portable?

          -

          A brief introduction to Baidu PC Faster

          -

          Baidu PC Faster is a full set of system utilities designed to speed up your PC and keep it clean, healthy, and running at peak performance. It also helps users recover available space on hard drives, improves boot time, and protects user privacy. It is developed by Baidu, a Chinese internet giant that also offers a web browser, a search engine, and an antivirus program.

          -

          Baidu PC Faster has four cleaning modes and covers over 300 cleaning checkpoints. It also has a SpeedUp function that optimizes your system settings, startup items, and network configuration. Moreover, it has a Cloud Scan feature that detects and removes malware threats with the help of certified antivirus organizations. It also has a Toolbox that provides various tools to fix common problems, such as file shredder, registry repair, disk defragmenter, and more. Finally, it has a PC App Store that allows you to download and update popular software with one click.

          -

          The features and benefits of Baidu PC Faster Portable

          -

          Baidu PC Faster Portable is a special version of Baidu PC Faster that does not require installation. You can simply download it from the official website and save it on a USB drive or any other removable device. Then, you can plug it into any computer and run it without leaving any traces or affecting the system registry.

          -

          -

          Baidu PC Faster Portable has the same features as the regular version, except for the Cloud Scan and the PC App Store. It can scan your computer for junk files, registry errors, privacy risks, and performance issues. It can also optimize your system settings, startup items, network configuration, and game performance. It can also provide various tools to fix common problems, such as file shredder, registry repair, disk defragmenter, and more.

          -

          The main benefits of using Baidu PC Faster Portable are:

          -
            -
          • It is lightweight and convenient. You don't need to install anything on your computer or worry about compatibility issues. You can carry it with you wherever you go and use it on any computer.
          • -
          • It is fast and efficient. You don't need to wait for the installation process or the updates. You can run it directly from your device and scan your computer in minutes. It can also free up more disk space and improve your PC performance.
          • -
          • It is safe and secure. You don't need to worry about malware infections or privacy leaks. It does not modify your system registry or leave any traces on your computer. It also has a file shredder that can permanently delete sensitive files and a registry repair that can fix corrupted entries.
          • -
          -

          How to download and use Baidu PC Faster Portable?

          -

          The steps to download Baidu PC Faster Portable from the official website

          -

          To download Baidu PC Faster Portable, you need to follow these steps:

          -
            -
          1. Go to the official website of Baidu PC Faster at http://www.pcfaster.com/en/.
          2. -
          3. Click on the "Download" button at the top right corner of the homepage.
          4. -
          5. On the download page, scroll down and find the "Portable Version" section.
          6. -
          7. Click on the "Download Now" button under the "Portable Version" section.
          8. -
          9. Save the file "BaiduPCFasterPortable.zip" on your USB drive or any other removable device.
          10. -
          -

          The steps to run Baidu PC Faster Portable on your PC

          -

          To run Baidu PC Faster Portable on your PC, you need to follow these steps:

          -
            -
          1. Plug your USB drive or any other removable device that contains Baidu PC Faster Portable into your PC.
          2. -
          3. Open the file "BaiduPCFasterPortable.zip" and extract its contents to a folder on your device.
          4. -
          5. Open the folder and double-click on the file "BaiduPCFaster.exe" to launch Baidu PC Faster Portable.
          6. -
          7. Click on the "Agree" button to accept the terms of service and privacy policy.
          8. -
          9. Wait for Baidu PC Faster Portable to load and scan your PC automatically.
          10. -
          -

          The main interface and functions of Baidu PC Faster Portable

          -

          The main interface of Baidu PC Faster Portable consists of four sections: Home, SpeedUp, Clean, and Toolbox. Each section has different functions that you can use to optimize your PC. Here is a brief overview of each section:

          - - - - - - -
Section: Function
Home: This is where you can see the overall health status of your PC, such as CPU usage, memory usage, disk usage, and network speed. You can also see the number of issues detected by Baidu PC Faster Portable, such as junk files, registry errors, privacy risks, and performance issues. You can click on the "Fix All" button to fix all the issues with one click, or you can click on the "Details" button to see more details and choose which issues to fix.
SpeedUp: This is where you can optimize your system settings, startup items, network configuration, and game performance. You can click on the "SpeedUp Now" button to apply the recommended optimizations with one click, or you can click on the "Advanced Settings" button to customize your own optimizations. You can also see the speedup score of your PC before and after the optimizations.
Clean: This is where you can clean up junk files, registry errors, privacy risks, and performance issues. You can click on the "Clean Now" button to clean up all the items with one click, or you can click on the "Advanced Settings" button to choose which items to clean. You can also see how much disk space you can free up by cleaning up these items.
Toolbox: This is where you can access various tools to fix common problems, such as file shredder, registry repair, disk defragmenter, driver updater, software uninstaller, and more. You can click on any tool to launch it and follow the instructions to use it.
          -

          How does Baidu PC Faster Portable compare to other PC optimization tools?

          -

          The advantages of Baidu PC Faster Portable over other tools

          -

          Baidu PC Faster Portable has some advantages over other PC optimization tools, such as:

          -
            -
          • It is portable and convenient. You don't need to install anything on your computer or worry about compatibility issues. You can carry it with you wherever you go and use it on any computer.
          • -
          • It is fast and efficient. You don't need to wait for the installation process or the updates. You can run it directly from your device and scan your computer in minutes. It can also free up more disk space and improve your PC performance.
          • -
          • It is safe and secure. You don't need to worry about malware infections or privacy leaks. It does not modify your system registry or leave any traces on your computer. It also has a file shredder that can permanently delete sensitive files and a registry repair that can fix corrupted entries.
          • -
          • It is comprehensive and versatile. It has a full set of system utilities that can cover over 300 cleaning checkpoints and optimize various aspects of your PC. It also has a toolbox that provides various tools to fix common problems, such as file shredder, registry repair, disk defragmenter, and more.
          • -
          -

          The disadvantages or limitations of Baidu PC Faster Portable

          -

          Baidu PC Faster Portable also has some disadvantages or limitations compared to other PC optimization tools, such as:

          -
            -
          • It does not have a cloud scan feature. Unlike the regular version of Baidu PC Faster, the portable version does not have a cloud scan feature that can detect and remove malware threats with the help of certified antivirus organizations. This means that you might need to use another antivirus program to protect your PC from viruses, spyware, ransomware, and other malicious software.
          • -
          • It does not have a PC app store feature. Unlike the regular version of Baidu PC Faster, the portable version does not have a PC app store feature that allows you to download and update popular software with one click. This means that you might need to use another software updater or downloader to keep your software up to date and secure.
          • -
          • It might not be compatible with some PCs or devices. Although Baidu PC Faster Portable claims to support Windows XP, Vista, 7, 8, 8.1, and 10, it might not work well on some PCs or devices due to different hardware configurations, system settings, or software conflicts. You might encounter some errors, crashes, or performance issues when using Baidu PC Faster Portable on some PCs or devices.
          • -
          -

          The user feedback and ratings of Baidu PC Faster Portable

          -

          Baidu PC Faster Portable has received mixed feedback and ratings from users who have tried it. On the official website of Baidu PC Faster, the portable version has a rating of 4.5 out of 5 stars based on 1,386 votes. However, on other websites or platforms, such as CNET Download, Softpedia, MajorGeeks, and PortableApps.com, the portable version has lower ratings ranging from 2 to 4 stars out of 5 based on fewer votes.

          -

          Some users have praised Baidu PC Faster Portable for its portability, convenience, speed, efficiency, safety, security, comprehensiveness, and versatility. They have reported that Baidu PC Faster Portable has helped them clean up junk files, optimize system settings, improve PC performance, fix common problems, and protect their privacy.

          -

          However, some users have criticized Baidu PC Faster Portable for its lack of cloud scan feature, lack of PC app store feature, incompatibility with some PCs or devices, errors, crashes, performance issues, malware infections, privacy leaks, and poor customer service. They have reported that Baidu PC Faster Portable has failed to scan or clean their PC, caused system instability or damage, installed unwanted software or toolbars, exposed their personal information or browsing history, and ignored their complaints or feedback.

          -

          Conclusion

          -

          A summary of the main points and recommendations

          -

          Baidu PC Faster Portable is a portable version of Baidu PC Faster, a popular optimization tool that can speed up your PC and keep it clean, healthy, and running at peak performance. It has the same features as the regular version, except for the cloud scan and the PC app store. It can scan your computer for junk files, registry errors, privacy risks, and performance issues. It can also optimize your system settings, startup items, network configuration, and game performance. It can also provide various tools to fix common problems, such as file shredder, registry repair, disk defragmenter, and more.

          -

          Baidu PC Faster Portable has some advantages over other PC optimization tools, such as portability, convenience, speed, efficiency, safety, security, comprehensiveness, and versatility. However, it also has some disadvantages or limitations compared to other PC optimization tools, such as lack of cloud scan feature, lack of PC app store feature, incompatibility with some PCs or devices, errors, crashes, performance issues, malware infections, privacy leaks, and poor customer service.

          -

          Baidu PC Faster Portable has received mixed feedback and ratings from users who have tried it. Some users have praised it for its benefits and results. However, some users have criticized it for its drawbacks and problems.

          -

          Therefore, our recommendation is to try Baidu PC Faster Portable at your own risk. You might find it useful and effective for your PC optimization needs. However, you might also encounter some issues or challenges that might affect your PC performance or security. You should always backup your important files and data before using any optimization tool. You should also use a reliable antivirus program to protect your PC from malware threats. You should also read the terms of service and privacy policy of Baidu PC Faster Portable before using it.

          -

          FAQs

          -

          Is Baidu PC Faster Portable safe and reliable?

          -

          Baidu PC Faster Portable claims to be safe and reliable. It does not modify your system registry or leave any traces on your computer. It also has a file shredder that can permanently delete sensitive files and a registry repair that can fix corrupted entries. However, some users have reported that Baidu PC Faster Portable has installed unwanted software or toolbars on their computer without their consent. Some users have also reported that Baidu PC Faster Portable has exposed their personal information or browsing history to third parties. Therefore, you should be careful when using Baidu PC Faster Portable and check its settings and permissions before running it.

          -

          Does Baidu PC Faster Portable support multiple languages?

          -

          Baidu PC Faster Portable supports multiple languages. You can choose your preferred language from the drop-down menu at the top right corner of the main interface. The available languages are: English, Chinese, Thai, Portuguese, Arabic, Spanish, Indonesian, Turkish, Vietnamese, and Russian. You can also help Baidu PC Faster Portable improve its translation quality by clicking on the "Feedback" button and submitting your suggestions or corrections.

          -

          How often does Baidu PC Faster Portable update its database and features?

          -

          Baidu PC Faster Portable updates its database and features regularly. It checks for updates every time you run it from your device. You can also manually check for updates by clicking on the "Update" button at the top right corner of the main interface. You can see the current version and the latest version of Baidu PC Faster Portable on the update page. You can also see the update log that shows the changes and improvements made in each version. You can download and install the latest version of Baidu PC Faster Portable by clicking on the "Download Now" button on the update page.

          -

          Can I customize the settings and preferences of Baidu PC Faster Portable?

          -

          Yes, you can customize the settings and preferences of Baidu PC Faster Portable according to your needs and preferences. You can access the settings menu by clicking on the "Settings" button at the top right corner of the main interface. You can see various options and tabs on the settings menu, such as General, SpeedUp, Clean, Toolbox, Skin, and About. You can change or adjust the options and tabs as you wish. For example, you can choose whether to run Baidu PC Faster Portable at Windows startup, whether to enable automatic cleaning or optimization, whether to enable notifications or reminders, whether to change the skin or theme of Baidu PC Faster Portable, and more.

          -

          How can I contact the support team of Baidu PC Faster Portable?

          -

          If you have any questions, problems, suggestions, or feedback regarding Baidu PC Faster Portable, you can contact the support team of Baidu PC Faster Portable by clicking on the "Feedback" button at the top right corner of the main interface. You can fill in your name, email address, subject, and message on the feedback form. You can also attach a screenshot or a file if necessary. You can then click on the "Submit" button to send your feedback to the support team. You can also visit the official website of Baidu PC Faster at http://www.pcfaster.com/en/ and click on the "Support" button at the bottom of the homepage. You can see various options and links on the support page, such as FAQ, Forum, Online Help, Email Support, and Phone Support. You can choose any option or link that suits your needs and preferences.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Blitzkrieg Game Download !!BETTER!! Free Full Version.md b/spaces/stomexserde/gpt4-ui/Examples/Blitzkrieg Game Download !!BETTER!! Free Full Version.md deleted file mode 100644 index af4a7c3d169b8b6fd78786d8110493d24fa3390f..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Blitzkrieg Game Download !!BETTER!! Free Full Version.md +++ /dev/null @@ -1,30 +0,0 @@ - -

          How to Download Blitzkrieg Game for Free

          -

Blitzkrieg is a classic real-time strategy game that lets you experience World War II from different perspectives. You can command the German Wehrmacht, the British Army, or the United States Army in various campaigns and missions. You can also create your own scenarios with the game's map editor and challenge other players online.

          -

          If you are a fan of historical warfare and tactical gameplay, you might want to download Blitzkrieg game for free. There are several ways to do this, but we will show you one of the easiest and safest methods.

          -

          blitzkrieg game download free full version


          DOWNLOAD ››››› https://urlgoal.com/2uI6uN



          -

          Step 1: Visit Retrolorian Website

          -

          Retrolorian is a website that offers free downloads of retro games for Windows. You can find many titles from different genres and platforms, including Blitzkrieg. To visit Retrolorian website, click here.

          -

          Step 2: Download Blitzkrieg Game

          -

          Once you are on the Retrolorian website, you will see a page with information about Blitzkrieg game, such as its release date, publisher, developer, genre, and platform. You will also see a download button at the bottom of the page. Click on it to start downloading Blitzkrieg game for free.

          -

          The file size is about 1.75 GB, so it might take some time depending on your internet speed. You will need a program like WinRAR or 7-Zip to extract the files after downloading.

          -

          Step 3: Install and Play Blitzkrieg Game

          -

          After extracting the files, you will see a folder named "Blitzkrieg Anthology". This folder contains four games: Blitzkrieg, Blitzkrieg: Burning Horizon, Blitzkrieg: Rolling Thunder, and Blitzkrieg: Iron Division. You can choose which one you want to play by double-clicking on its executable file.

          -

          The games are already pre-installed and patched, so you don't need to do anything else. Just follow the on-screen instructions and enjoy playing Blitzkrieg game for free.

          -

          Conclusion

          -

          Blitzkrieg is a game that will appeal to anyone who likes historical strategy games. It offers a realistic and challenging simulation of WWII battles, with a variety of units, weapons, and tactics. You can download Blitzkrieg game for free from Retrolorian website and play it on your Windows PC.

          -

          -

          We hope this article was helpful and informative. If you have any questions or comments, feel free to leave them below.

          - -

          Blitzkrieg Game Review

          -

Blitzkrieg is not only a historical simulation but also a fun and engaging strategy game. It has many features that make it stand out from other games in its genre. Here are some of them:

          -
            -
          • The game has a flexible campaign structure that allows you to choose your own path and difficulty level. You can either follow the historical events or create your own alternative scenarios.
          • -
          • The game has a realistic and detailed graphics engine that shows the terrain, weather, and lighting effects. You can zoom in and out of the battlefield and see the damage and destruction caused by your actions.
          • -
          • The game has a complex and dynamic AI that adapts to your moves and tactics. The enemy will try to flank you, ambush you, retreat, or counterattack depending on the situation. You will also have to deal with friendly fire, morale, and supply issues.
          • -
          • The game has a large variety of units, weapons, and vehicles that you can use in your missions. You can customize your army with different types of infantry, tanks, artillery, aircraft, and more. You can also upgrade your units with new equipment and skills.
          • -
          • The game has a multiplayer mode that supports up to 16 players online or via LAN. You can play cooperatively or competitively in different modes such as deathmatch, capture the flag, or king of the hill.
          • -
          -

Blitzkrieg will challenge your strategic skills and immerse you in the WWII era. It deserves a try if you are a fan of war games.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Coco English Hd Full Movie Download 1080p Movies !!LINK!!.md b/spaces/stomexserde/gpt4-ui/Examples/Coco English Hd Full Movie Download 1080p Movies !!LINK!!.md deleted file mode 100644 index b29ceaba4c80e0a6cdecd01cd71f65fe550eacee..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Coco English Hd Full Movie Download 1080p Movies !!LINK!!.md +++ /dev/null @@ -1,23 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Coco English Hd Full Movie Download 1080p Movies": - -

          Coco English Hd Full Movie Download 1080p Movies: How to Watch the Award-Winning Animated Film Online

          -

          Coco is a 2017 animated film produced by Pixar Animation Studios and Walt Disney Pictures. The film tells the story of Miguel, a young boy who dreams of becoming a musician despite his family's ban on music. On the Day of the Dead, Miguel accidentally enters the Land of the Dead, where he meets his ancestors and learns about his heritage.

          -

          Coco was praised by critics and audiences alike for its animation, music, voice acting, and emotional story. It won two Academy Awards for Best Animated Feature and Best Original Song, as well as several other accolades.

          -

          Coco English Hd Full Movie Download 1080p Movies


          Download File ✪✪✪ https://urlgoal.com/2uI7pb



          -

          If you want to watch Coco English Hd Full Movie Download 1080p Movies online, you have several options. Here are some of them:

          -
            -
          • Disney+: Disney's streaming service offers Coco in HD quality for its subscribers. You can also download the movie to watch offline on your device. Disney+ costs $8.99 per month or $89.99 per year in Canada.
          • -
          • Netflix: Netflix Canada has Coco available for streaming in HD quality. You can also download the movie to watch offline on your device. Netflix costs $9.99 per month for the basic plan, $14.99 per month for the standard plan, or $18.99 per month for the premium plan.
          • -
          • Amazon Prime Video: Amazon's streaming service offers Coco for rent or purchase in HD quality. You can rent the movie for $4.99 or buy it for $19.99. You can also download the movie to watch offline on your device. Amazon Prime Video costs $7.99 per month or $79 per year in Canada.
          • -
          • iTunes: Apple's digital store offers Coco for rent or purchase in HD quality. You can rent the movie for $4.99 or buy it for $19.99. You can also download the movie to watch offline on your device.
          • -
          • Google Play: Google's digital store offers Coco for rent or purchase in HD quality. You can rent the movie for $4.99 or buy it for $19.99. You can also download the movie to watch offline on your device.
          • -
          -

          Whichever option you choose, you will enjoy watching Coco English Hd Full Movie Download 1080p Movies online and experience the magic of this beautiful film.


          Coco is not only a visually stunning film, but also a culturally rich one. The film draws inspiration from the Mexican holiday of Día de los Muertos, or Day of the Dead, which celebrates the lives of the deceased and their connection to the living. The film features many elements of Mexican culture, such as music, food, art, and folklore. The film also showcases the diversity and complexity of Mexican families and their traditions.

          -

          The film also explores themes such as identity, family, dreams, and death. Miguel struggles to find his place in his family and his passion for music. He learns to appreciate his family's history and values, while also pursuing his own goals. He also discovers the meaning of death and how it affects the living and the dead. He realizes that death is not the end, but rather a part of life that can be celebrated and remembered.

          -

          Coco is a film that will make you laugh, cry, and sing along. It is a film that will touch your heart and soul. It is a film that you will want to watch again and again. Coco English Hd Full Movie Download 1080p Movies is a must-see for anyone who loves animation, music, and culture.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/test_gpt.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/test_gpt.py deleted file mode 100644 index 89dd726a856297ae81fad5b3a8f1cffbd495952d..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/test_gpt.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/4/29 19:47 -@Author : alexanderwu -@File : test_gpt.py -""" - -import pytest - -from metagpt.logs import logger - - -@pytest.mark.usefixtures("llm_api") -class TestGPT: - def test_llm_api_ask(self, llm_api): - answer = llm_api.ask('hello chatgpt') - assert len(answer) > 0 - - # def test_gptapi_ask_batch(self, llm_api): - # answer = llm_api.ask_batch(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - # assert len(answer) > 0 - - def test_llm_api_ask_code(self, llm_api): - answer = llm_api.ask_code(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_aask(self, llm_api): - answer = await llm_api.aask('hello chatgpt') - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_aask_code(self, llm_api): - answer = await llm_api.aask_code(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_costs(self, llm_api): - await llm_api.aask('hello chatgpt') - costs = llm_api.get_costs() - logger.info(costs) - assert costs.total_cost > 0 diff --git a/spaces/sunil448832/retrieval-augment-generation/chat.py b/spaces/sunil448832/retrieval-augment-generation/chat.py deleted file mode 100644 index ff300e2bb237a9f8ab568c09a2cc920cabb47993..0000000000000000000000000000000000000000 --- a/spaces/sunil448832/retrieval-augment-generation/chat.py +++ /dev/null @@ -1,76 +0,0 @@ -from models import EmbeddingModel, LLM -from utils import MistralPrompts -from vector_store import FaissVectorStore -import argparse - -import warnings -warnings.filterwarnings("ignore") - -# Create a ChatBot class to manage interactions -class ChatBot: - def __init__(self, llm, embedding_model, vector_store): - self.llm = llm - self.embedding_model = embedding_model - self.chat_history = [] - self.vector_store = vector_store - - def format_context(self, retrieved_documents): - context, sources = '', '' - - # Format retrieved documents into context and sources - # This is simplest way to combine. there are other techniques as well to try out. 
- for doc in retrieved_documents: - context += doc.text + '\n\n' - sources += str(doc.metadata) + '\n' - - return context, sources - - def chat(self, question): - if len(self.chat_history): - # Create a prompt based on chat history - chat_history_prompt = MistralPrompts.create_history_prompt(self.chat_history) - standalone_question_prompt = MistralPrompts.create_standalone_question_prompt(question, chat_history_prompt) - standalone_question = self.llm.generate_response(standalone_question_prompt) - else: - chat_history_prompt = '' - standalone_question = question - - # Encode the question using the embedding model - query_embedding = self.embedding_model.encode(standalone_question) - - # Retrieve documents related to the question - retrieved_documents = self.vector_store.query(query_embedding, 3) - context, sources = self.format_context(retrieved_documents) - - # Print information about retrieved documents - print("Retrieved documents info: \n", sources) - - # Create a prompt and generate a response - prompt = MistralPrompts.create_question_prompt(question, context, chat_history_prompt) - response = self.llm.generate_response(prompt) - - # Extract the response and update chat history - response = MistralPrompts.extract_response(response) - self.chat_history.append((question, response)) - return response - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--vector_database_path", default='vector_db',help="Vector database which store embeddings vector") - args = parser.parse_args() - - VECTOR_DATABASE_PATH = parser.vector_database_path - # Initialize models and vector store - embedding_model = EmbeddingModel(model_name='sentence-transformers/all-MiniLM-L6-v2') - llm = LLM("mistralai/Mistral-7B-Instruct-v0.1") - vector_store = FaissVectorStore.as_retriever(database_path=VECTOR_DATABASE_PATH) - - # Create a ChatBot instance - chat_bot = ChatBot(llm, embedding_model, vector_store) - - # Start the conversation - print("Assistant Bot: Hello, I'm the Assistant Bot! How may I assist you today?") - while True: - question = input("User:") - response = chat_bot.chat(question) - print("Assistant Bot:", response, '\n') diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FSX FS9 Zinertek World Environment 2007 Lucky Patcher.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FSX FS9 Zinertek World Environment 2007 Lucky Patcher.md deleted file mode 100644 index ee8eec39274c4537b59df1de5afa259d9895f7a7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/FSX FS9 Zinertek World Environment 2007 Lucky Patcher.md +++ /dev/null @@ -1,6 +0,0 @@ -

          FSX FS9 Zinertek World Environment 2007 Lucky Patcher


          DOWNLOAD ––– https://cinurl.com/2uEYlN



          - -. coub.com/stories/3142531-fsx-fs9-zinertek-world-environment-2007-lucky-patcher Download - https://mega.nz/#!DnJK3LlR!vNk3txJd6fM8iBxN2W_nWI_M_wYzEw1L3jOUH3dqUt9Zzc6qgZfI5jmB_GZeZ1B6tJk2dHwTbSbE7XuU8tQ9jbKuOJi1K_RrMZz4JHw-TQF 8a78ff9644
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Sharing Pack V0.9.6 Startimes UPDATED.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Sharing Pack V0.9.6 Startimes UPDATED.md deleted file mode 100644 index 382cb89bc9a12026c94f8896c38675bbc85bc1ad..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Windows Sharing Pack V0.9.6 Startimes UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Windows sharing pack v0.9.6 startimes


Download: https://cinurl.com/2uEXM6



- -aver v1.9.9.1 Multigrafix NixOS 2016 startimesaver v1.9.9.1 Multigrafix NixOS 2016 startimesaver v1.9.9.1 Multigrafix NixOS 2016 Ubuntu 18.04 [Ubuntu Studio 18.04] startimesaver v1.9.9.1 Multigrafix NixOS 2016 Ubuntu 16.04 [Ubuntu 16.10] 4fefd39f24
          -
          -
          -

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando).rar BEST.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando).rar BEST.md deleted file mode 100644 index 0d0cd2be764ea6396d5b2eb4071acdc2da1c87ac..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando).rar BEST.md +++ /dev/null @@ -1,20 +0,0 @@ - -

          How to Reset Epson TX200-TX210 Printers with Adjustment Program

          -

          If you have an Epson TX200 or TX210 printer that is displaying error messages or flashing lights, you may need to reset it using an adjustment program. An adjustment program is a software tool that can reset the waste ink counter, clear the print head nozzles, and perform other maintenance tasks on your printer. In this article, we will show you how to download and use the adjustment program for Epson TX200-TX210 printers.

          -

          Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando).rar


          Download Zip ===> https://urluss.com/2uCEdw



          -

          Step 1: Download the Adjustment Program

          -

          The adjustment program for Epson TX200-TX210 printers is a compressed file with the extension .rar. You can download it from this link: https://www.4shared.com/rar/9yA1rTqG/Adjustment_Program_-_Reset_Imp.html. You will need a 4shared account to access the file. If you don't have one, you can create one for free.

          -

          Step 2: Extract the Adjustment Program

          -

          After downloading the file, you will need to extract it using a software that can handle .rar files, such as WinRAR or 7-Zip. You can download WinRAR from this link: https://www.win-rar.com/download.html. You can download 7-Zip from this link: https://www.7-zip.org/download.html. To extract the file, right-click on it and select "Extract Here" or "Extract to Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando)" depending on your software.
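If you prefer to script this step instead of using the right-click menu, a small helper like the one below can drive the 7-Zip command-line tool. This is only an illustrative sketch: the 7-Zip install path and the Downloads location are assumptions, so adjust both to match your system.

```python
import subprocess
from pathlib import Path

# Assumed locations - change these to match your own system.
SEVEN_ZIP = Path(r"C:\Program Files\7-Zip\7z.exe")
ARCHIVE = Path.home() / "Downloads" / "Adjustment Program - Reset Impressora Epson TX200-TX210 ECC (Luzes Piscando).rar"
DEST = ARCHIVE.with_suffix("")  # extract into a folder named after the archive

DEST.mkdir(parents=True, exist_ok=True)

# "x" extracts with full paths, -o<dir> sets the output folder, -y answers prompts with yes.
result = subprocess.run(
    [str(SEVEN_ZIP), "x", str(ARCHIVE), f"-o{DEST}", "-y"],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print(f"Extracted to {DEST}")
else:
    print("Extraction failed:")
    print(result.stderr or result.stdout)
```

Either way, after extraction you should end up with the folder containing the "AdjProg.exe" file described in Step 3.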

          -

          Step 3: Run the Adjustment Program

          -

          After extracting the file, you will see a folder with the same name as the file. Open the folder and double-click on the file named "AdjProg.exe". This will launch the adjustment program. You may see a warning message from your antivirus software or Windows Defender. This is normal and you can ignore it by clicking on "Run Anyway" or "Allow". The adjustment program is safe and does not contain any viruses or malware.

          -

          Step 4: Select Your Printer Model

          -

          When the adjustment program opens, you will see a window with several options. Click on the button that says "Select". This will open another window where you can choose your printer model. Select "TX200" or "TX210" depending on your printer and click on "OK".

          -

          -

          Step 5: Choose Your Adjustment Mode

          -

          After selecting your printer model, you will see another window with two options: "Particular Adjustment Mode" and "Maintenance". Click on the button that says "Particular Adjustment Mode". This will open another window where you can choose the type of adjustment you want to perform on your printer.

          -

          Step 6: Reset Your Printer

          -

          In the window that opens, you will see a list of adjustment functions. The most common one is "Waste Ink Pad Counter", which resets the counter that measures how much ink has been used by your printer. If your printer is displaying an error message that says "A printer's ink pad is at the end of its service life" or "Parts inside your printer are near the end of their service life", you need to reset this counter. To do so, click on "Waste Ink Pad Counter" and then click on "OK". This will open another window where you can check and reset the counter.

          -

In this window, click on "Check" to see the current value of the counter. It will show a percentage that indicates how much ink has been used by your printer. If it is close to or above 100%, you need to reset it. To do so, click on "Initialization" and then click on "OK". This will reset the counter to zero and clear the error message. You may see a message that says "Please turn off printer". If so, turn off your printer and then turn it back on to complete the reset.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Alien 1979 Directors Cut 720p Or 1080p UPD.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Alien 1979 Directors Cut 720p Or 1080p UPD.md deleted file mode 100644 index 9875e8d1f091f5f1ee826cf1c38fcf641ea6f74e..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Alien 1979 Directors Cut 720p Or 1080p UPD.md +++ /dev/null @@ -1,13 +0,0 @@ -

          Alien 1979 Directors Cut 720p Or 1080p


Download: https://urluss.com/2uCEVt



          -
-ALIEN (1979) | movie trailer | Full HD | 1080p. 1,304 views - Jan 26, 2020 - After a space merchant ship ... Alien is a 1979 American science fiction horror film directed by Ridley Scott, from an original screenplay by Dan O'Bannon (it is not based on a novel). -The plot of the film: After a space merchant ship ... -In 1979, the film "Alien" was released, telling the story of ... -Critics' reviews and ratings of Alien (1979) are available on the Allocine website. -Alien (1979) - everything about the film: release date, trailers, photos, actors. -Reviews ... 8a78ff9644
          -
          -
          -

          diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/_functions.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/_functions.py deleted file mode 100644 index 9b5a8a44483ab991411d07122b22a1d027e4be8e..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. - output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/spaces/t13718236382/bingoGPT4/src/components/ui/codeblock.tsx b/spaces/t13718236382/bingoGPT4/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { 
Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
          -
          - {language} -
          - - -
          -
          - - {value} - -
          - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/talhaty/Faceswapper/roop/utilities.py b/spaces/talhaty/Faceswapper/roop/utilities.py deleted file mode 100644 index 90c8d981f5f159a459ca0c08cc23dfac8d04c068..0000000000000000000000000000000000000000 --- a/spaces/talhaty/Faceswapper/roop/utilities.py +++ /dev/null @@ -1,141 +0,0 @@ -import glob -import mimetypes -import os -import platform -import shutil -import ssl -import subprocess -import urllib -from pathlib import Path -from typing import List, Any -from tqdm import tqdm - -import roop.globals - -TEMP_FILE = 'temp.mp4' -TEMP_DIRECTORY = 'temp' - -# monkey patch ssl for mac -if platform.system().lower() == 'darwin': - ssl._create_default_https_context = ssl._create_unverified_context - - -def run_ffmpeg(args: List[str]) -> bool: - commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', roop.globals.log_level] - commands.extend(args) - try: - subprocess.check_output(commands, stderr=subprocess.STDOUT) - return True - except Exception: - pass - return False - - -def detect_fps(target_path: str) -> float: - command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path] - output = subprocess.check_output(command).decode().strip().split('/') - try: - numerator, denominator = map(int, output) - return numerator / denominator - except Exception: - pass - return 30.0 - - -def extract_frames(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')]) - - -def create_video(target_path: str, fps: float = 30.0) -> None: - temp_output_path = get_temp_output_path(target_path) - temp_directory_path = get_temp_directory_path(target_path) - run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', roop.globals.video_encoder, '-crf', str(roop.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path]) - - -def restore_audio(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path]) - if not done: - move_temp(target_path, output_path) - - -def get_temp_frame_paths(target_path: str) -> List[str]: - temp_directory_path = get_temp_directory_path(target_path) - return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png'))) - - -def get_temp_directory_path(target_path: str) -> str: - target_name, _ = os.path.splitext(os.path.basename(target_path)) - target_directory_path = os.path.dirname(target_path) - return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name) - - -def get_temp_output_path(target_path: str) -> str: - temp_directory_path = get_temp_directory_path(target_path) - return os.path.join(temp_directory_path, TEMP_FILE) - - -def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Any: - if source_path and target_path: - source_name, _ = os.path.splitext(os.path.basename(source_path)) - target_name, target_extension = os.path.splitext(os.path.basename(target_path)) - if os.path.isdir(output_path): - return os.path.join(output_path, source_name + '-' + target_name + target_extension) - return output_path - - -def create_temp(target_path: str) -> None: 
- temp_directory_path = get_temp_directory_path(target_path) - Path(temp_directory_path).mkdir(parents=True, exist_ok=True) - - -def move_temp(target_path: str, output_path: str) -> None: - temp_output_path = get_temp_output_path(target_path) - if os.path.isfile(temp_output_path): - if os.path.isfile(output_path): - os.remove(output_path) - shutil.move(temp_output_path, output_path) - - -def clean_temp(target_path: str) -> None: - temp_directory_path = get_temp_directory_path(target_path) - parent_directory_path = os.path.dirname(temp_directory_path) - if not roop.globals.keep_frames and os.path.isdir(temp_directory_path): - shutil.rmtree(temp_directory_path) - if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path): - os.rmdir(parent_directory_path) - - -def has_image_extension(image_path: str) -> bool: - return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp')) - - -def is_image(image_path: str) -> bool: - if image_path and os.path.isfile(image_path): - mimetype, _ = mimetypes.guess_type(image_path) - return bool(mimetype and mimetype.startswith('image/')) - return False - - -def is_video(video_path: str) -> bool: - if video_path and os.path.isfile(video_path): - mimetype, _ = mimetypes.guess_type(video_path) - return bool(mimetype and mimetype.startswith('video/')) - return False - - -def conditional_download(download_directory_path: str, urls: List[str]) -> None: - if not os.path.exists(download_directory_path): - os.makedirs(download_directory_path) - for url in urls: - download_file_path = os.path.join(download_directory_path, os.path.basename(url)) - if not os.path.exists(download_file_path): - request = urllib.request.urlopen(url) # type: ignore[attr-defined] - total = int(request.headers.get('Content-Length', 0)) - with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress: - urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size)) # type: ignore[attr-defined] - - -def resolve_relative_path(path: str) -> str: - return os.path.abspath(os.path.join(os.path.dirname(__file__), path)) diff --git a/spaces/terfces0erbo/CollegeProjectV2/Cubase 6 Full [BEST] Version Free Download 25.md b/spaces/terfces0erbo/CollegeProjectV2/Cubase 6 Full [BEST] Version Free Download 25.md deleted file mode 100644 index e0c8c9d7359a486bc7fe5ade1f7825c3a7473d10..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Cubase 6 Full [BEST] Version Free Download 25.md +++ /dev/null @@ -1,13 +0,0 @@ -

          cubase 6 full version free download 25


          Download Zip >>>>> https://bytlly.com/2uGiY0



          -
-November 24, 2021 - Download Cubasis LE 2 and enjoy it on your iPhone, iPad, ... Version 2.8.6 ... I can't upgrade to the full version. Is it possible? -Cubase LE is a professional software package for creating music on a computer, ... -Cubase LE 4 is Steinberg's version of the popular professional audio editor. ... -Cubase LE 2 is a professional music production software for ... -Has anyone compared Cubase LE vs Cubase LE 2? -If yes ... -Cubase LE 5 is the professional version of the popular music editor from Steinberg. ... 8a78ff9644
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Jis B 1012 Pdf Downloadl NEW!.md b/spaces/terfces0erbo/CollegeProjectV2/Jis B 1012 Pdf Downloadl NEW!.md deleted file mode 100644 index 7ec491c135d8aaddf09bb183560d75fdc3f8dd96..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Jis B 1012 Pdf Downloadl NEW!.md +++ /dev/null @@ -1,86 +0,0 @@ -
          -

          What is Jis B 1012 Pdf Downloadl and Why You Need It

          -

          If you are working with screws and cross recesses, you may have heard of Jis B 1012 Pdf Downloadl. Jis B 1012 Pdf Downloadl is a document that contains the Japanese Industrial Standard (JIS) for cross recesses for screws. Cross recesses are a type of screw drive that have a cross-shaped slot or groove on the head of the screw, allowing a screwdriver or a bit to fit snugly and securely into the recess and transmit torque to the screw. Cross recesses are widely used in various applications and industries because they offer several advantages over other types of screw drives, such as better grip, higher torque, less wear and damage, and compatibility with different tools.

          -

          Jis B 1012 Pdf Downloadl


          Download ->>> https://bytlly.com/2uGkNm



          -

          However, cross recesses also have some drawbacks and limitations, such as requiring precise matching between the screwdriver and the screw size and type, being difficult to remove if over-tightened or corroded, being easily damaged by improper use or poor quality tools. To ensure optimal performance and quality of cross recesses for screws, it is important to follow the specifications and guidelines provided by Jis B 1012 Pdf Downloadl. Jis B 1012 Pdf Downloadl covers various aspects of cross recesses for screws, such as:

          -
            -
          • The classification and designation of cross recess types, such as Type A (commonly known as Phillips), Type B (commonly known as Pozidriv), Type C (commonly known as Supadriv) and Type D (commonly known as JIS).
          • -
• The dimensions and tolerances of cross recesses for different sizes and types of screws, such as M1.6 to M10 for Type A, M1.6 to M8 for Type B, M1.6 to M6 for Type C and M1.6 to M5 for Type D (see the illustrative sketch after this list).
          • -
          • The measuring method and instruments for cross recesses, such as optical projectors, profile projectors, dial indicators and gauges.
          • -
          • The quality requirements and inspection methods for cross recesses, such as visual inspection, dimensional inspection, functional inspection and hardness inspection.
          • -
          -
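To make the type designations and size ranges quoted above easier to work with, here is a small illustrative sketch in Python that encodes them (Type A up to M10, Type B up to M8, Type C up to M6, Type D up to M5) and looks up which recess types cover a given metric screw size. It is only a convenience table built from the ranges listed above, not a substitute for the dimensions, tolerances and measuring methods defined in the standard itself.

```python
# Nominal screw-size ranges for each cross recess type, as quoted above (in mm).
CROSS_RECESS_RANGES = {
    "Type A (Phillips)": (1.6, 10.0),
    "Type B (Pozidriv)": (1.6, 8.0),
    "Type C (Supadriv)": (1.6, 6.0),
    "Type D (JIS)": (1.6, 5.0),
}

def recess_types_for(screw_size_mm):
    """Return the cross recess types whose quoted size range covers the given metric size."""
    return [
        name
        for name, (low, high) in CROSS_RECESS_RANGES.items()
        if low <= screw_size_mm <= high
    ]

if __name__ == "__main__":
    for size in (2.0, 5.0, 6.0, 8.0):
        matches = recess_types_for(size)
        print(f"M{size:g}: {', '.join(matches) if matches else 'none of the listed types'}")
```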

          How to Download Jis B 1012 Pdf Downloadl for Free

          -

          If you want to download Jis B 1012 Pdf Downloadl for free, you will need to find a website or a platform that offers this document in PDF format. However, this may not be an easy task, as Jis B 1012 Pdf Downloadl is a copyrighted document that is not freely available on the internet. You may have to pay a fee or register an account to access Jis B 1012 Pdf Downloadl from some sources. You may also encounter some risks or challenges when downloading Jis B 1012 Pdf Downloadl from some sources, such as:

          -
            -
          • You may download a fake or corrupted file that does not contain Jis B 1012 Pdf Downloadl or contains malware or viruses that can harm your device or steal your information.
          • -
          • You may download an outdated or incomplete version of Jis B 1012 Pdf Downloadl that does not reflect the latest revisions or amendments.
          • -
          • You may download an unauthorized or illegal version of Jis B 1012 Pdf Downloadl that violates the intellectual property rights of the original publisher or author.
          • -
          • You may download a version of Jis B 1012 Pdf Downloadl that is not compatible with your device or software or that has poor quality or readability.
          • -
          -

          To avoid these risks or challenges, you should download Jis B 1012 Pdf Downloadl from a trusted source that has positive reviews and feedback from other users. You can use the links provided in this article or search for other sources on the internet. To download Jis B 1012 Pdf Downloadl from these sources, you need to follow these steps:

          -
            -
          1. Click on the link or enter the URL of the source in your browser.
          2. -
          3. Find the download button or link on the webpage and click on it.
          4. -
          5. Wait for a few seconds or minutes until the file is ready to be downloaded.
          6. -
          7. Save the file on your device or open it with your PDF reader software.
          8. -
          -

          What are the Benefits of Downloading Jis B 1012 Pdf Downloadl

          -

          Downloading Jis B 1012 Pdf Downloadl can offer you several benefits if you are interested in cross recesses for screws or related topics. Some of these benefits are:

          -
            -
          • You can access Jis B 1012 Pdf Downloadl anytime and anywhere without needing an internet connection or a physical copy.
          • -
          • You can save time and money by downloading Jis B 1012 Pdf Downloadl instead of buying it from a bookstore or ordering it online.
          • -
          • You can learn more about cross recesses for screws and improve your knowledge and skills in this field.
          • -
          • You can use Jis B 1012 Pdf Downloadl as a reference or a guide for your projects or tasks involving cross recesses for screws.
          • -
          - -

          Conclusion

          - -

          Jis B 1012 Pdf Downloadl is a document that contains the Japanese Industrial Standard (JIS) for cross recesses for screws. It is a useful reference for engineers, designers, manufacturers and users of screws and related products. Cross recesses are a type of screw drive that have a cross-shaped slot or groove on the head of the screw, allowing a screwdriver or a bit to fit snugly and securely into the recess and transmit torque to the screw. Cross recesses are widely used in various applications and industries because they offer several advantages over other types of screw drives, such as better grip, higher torque, less wear and damage, and compatibility with different tools. However, cross recesses also have some drawbacks and limitations, such as requiring precise matching between the screwdriver and the screw size and type, being difficult to remove if over-tightened or corroded, being easily damaged by improper use or poor quality tools.

          -

          - -

          If you want to download Jis B 1012 Pdf Downloadl for free from the internet, you should follow some precautions and steps to ensure that you get a genuine and high-quality version of this document from a reliable source. You should also make use of this document wisely and responsibly by respecting

          -

          the intellectual property rights of the original publisher or author. You should also follow the specifications and guidelines provided by Jis B 1012 Pdf Downloadl when working with cross recesses for screws to ensure optimal performance and quality.

          - -

Jis B 1012 Pdf Downloadl is a valuable document that can help you understand and apply cross recesses for screws in various situations.

          -

          How to Use Jis B 1012 Pdf Downloadl as a Reference or a Guide

          -

          If you have downloaded Jis B 1012 Pdf Downloadl and want to use it as a reference or a guide for your projects or tasks involving cross recesses for screws, you should follow these steps:

          -
            -
          1. Open Jis B 1012 Pdf Downloadl with your PDF reader software and navigate to the section or page that contains the information you need.
          2. -
          3. Read and understand the information carefully and make sure it matches your situation and requirements.
          4. -
          5. Apply the information to your project or task by following the specifications and guidelines provided by Jis B 1012 Pdf Downloadl.
          6. -
          7. Check your work and make sure it complies with Jis B 1012 Pdf Downloadl and meets your expectations and goals.
          8. -
          -

          Jis B 1012 Pdf Downloadl can help you with various aspects of cross recesses for screws, such as:

          -
            -
          • Choosing the right type and size of cross recess for your screw and application.
          • -
          • Measuring and identifying the cross recess on your screw head.
          • -
          • Selecting and using the appropriate screwdriver or bit for your cross recess.
          • -
          • Adjusting and applying the correct torque and fastening force to your screw.
          • -
          • Inspecting and maintaining the quality and condition of your cross recess and screw.
          • -
          - -

          How to Learn More About Cross Recesses for Screws

          -

          If you want to learn more about cross recesses for screws and related topics, you can use Jis B 1012 Pdf Downloadl as a starting point. Jis B 1012 Pdf Downloadl provides you with the basic and essential information about cross recesses for screws, but it may not cover all the details and nuances that you may encounter in your projects or tasks. To expand your knowledge and skills in this field, you can use these resources:

          -
            -
          • Other standards and documents that are related to cross recesses for screws, such as ISO 4757, DIN 5260, ANSI B18.6.3, etc.
          • -
          • Books, articles, blogs, podcasts, videos and other media that discuss cross recesses for screws and their applications and benefits.
          • -
          • Courses, workshops, seminars, webinars and other educational programs that teach you about cross recesses for screws and how to use them effectively.
          • -
          • Experts, mentors, peers and other professionals who have experience and expertise in cross recesses for screws and can offer you advice, guidance, feedback and support.
          • -
          - -

          Conclusion

          - -


          - -


          - -


          - -

          If you have downloaded Jis B 1012 Pdf Downloadl and want to use it as a reference or a guide for your projects or tasks involving cross recesses for screws, you should follow some steps to access, read, understand and apply the information contained in this document. You should also check your work and make sure it complies with Jis B 1012 Pdf Downloadl and meets your expectations and goals.

          - -

          If you want to learn more about cross recesses for screws and related topics, you can use Jis B 1012 Pdf Downloadl as a starting point. You can also use other resources such as other standards and documents, books, articles, blogs, podcasts, videos and other media, courses, workshops, seminars, webinars and other educational programs, experts, mentors, peers and other professionals who can offer you more information, knowledge and skills in this field.

          - -

          Cross recesses for screws are a useful and versatile type of screw drive that can help you with various applications and industries. By downloading Jis B 1012 Pdf Downloadl for free from the internet and using it as a reference or a guide, you can improve your understanding and performance of cross recesses for screws and achieve your desired results.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/terrierteam/doc2query/app.py b/spaces/terrierteam/doc2query/app.py deleted file mode 100644 index 9a8453f2ac3975b8554452c14a68b31daeadcba1..0000000000000000000000000000000000000000 --- a/spaces/terrierteam/doc2query/app.py +++ /dev/null @@ -1,169 +0,0 @@ -import pyterrier as pt -pt.init() -import numpy as np -import pandas as pd -import gradio as gr -from pyterrier_doc2query import Doc2Query, QueryScorer, QueryFilter -from pyterrier_dr import ElectraScorer -from pyterrier_gradio import Demo, MarkdownFile, interface, df2code, code2md, EX_D - -MODEL = 'macavaney/doc2query-t5-base-msmarco' -SCORE_MODEL = 'crystina-z/monoELECTRA_LCE_nneg31' -PERCENTILES_BY_5 = np.array([-3.80468750e+00, -2.21679688e+00, -1.25683594e+00, -5.58105469e-01, -7.65323639e-04, 4.69482422e-01, 8.83300781e-01, 1.25878906e+00, 1.61035156e+00, 1.94335938e+00, 2.26562500e+00, 2.58007812e+00, 2.89648438e+00, 3.21484375e+00, 3.54687500e+00, 3.90039062e+00, 4.30078125e+00, 4.77343750e+00, 5.37109375e+00]) -COLORS = ['rgb(252, 132, 100)','rgb(252, 148, 116)','rgb(252, 166, 137)','rgb(252, 183, 156)','rgb(253, 200, 178)','rgb(254, 215, 198)','rgb(255, 228, 216)','rgb(255, 237, 228)','rgb(256, 245, 240)','rgb(256, 256, 256)','rgb(247, 252, 245)','rgb(240, 250, 237)','rgb(233, 247, 228)','rgb(222, 242, 216)','rgb(209, 237, 203)','rgb(195, 232, 188)','rgb(180, 225, 173)','rgb(163, 218, 157)','rgb(145, 210, 142)','rgb(125, 201, 126)'] - -doc2query = Doc2Query(MODEL, append=True, num_samples=5) -electra = ElectraScorer() -query_scorer = QueryScorer(electra) -query_filter = QueryFilter(t=0, append=False) - -COLAB_NAME = 'pyterrier_doc2query.ipynb' -COLAB_INSTALL = ''' -!pip install -q git+https://github.com/terrier-org/pyterrier -!pip install -q git+https://github.com/terrierteam/pyterrier_doc2query -'''.strip() -COLAB_INSTALL_MM = COLAB_INSTALL + '\n!pip install -q git+https://github.com/terrierteam/pyterrier_dr faiss-cpu' - -def predict(input, model, append, num_samples): - assert model == MODEL - doc2query.append = append - doc2query.num_samples = num_samples - code = f'''import pandas as pd -from pyterrier_doc2query import Doc2Query - -doc2query = Doc2Query({repr(model)}, append={append}, num_samples={num_samples}) - -doc2query({df2code(input)}) -''' - res = doc2query(input) - vis = generate_vis(res) - return (doc2query(input), code2md(code, COLAB_INSTALL, COLAB_NAME), vis) - -def generate_vis(df): - result = [] - for row in df.itertuples(index=False): - qs = [] - if hasattr(row, 'querygen_score'): - for q, score in zip(row.querygen.split('\n'), row.querygen_score): - bucket = np.searchsorted(PERCENTILES_BY_5, score) - color = COLORS[bucket] - percentile = bucket * 5 - qs.append(f''' -
          -{percentile}th {q} -
          -''') - elif hasattr(row, 'querygen'): - for q in row.querygen.split('\n'): - qs.append(f''' -
          {q}
          -''') - qs = '\n'.join(qs) - if qs: - qs = f''' -
          Expansion Queries:
          -{qs} -''' - text = row.text.replace('\n', '
          ') - result.append(f''' -
          Document: {row.docno}
          -
          -
          -{text} -
          -{qs} -
          -''') - return '\n'.join(result) - -def predict_mm(input, model, num_samples, score_model, filter_pct): - assert model == MODEL - assert score_model == SCORE_MODEL - doc2query.append = False - doc2query.num_samples = num_samples - if filter_pct > 0: - query_filter.t = PERCENTILES_BY_5[filter_pct//5-1] - pipeline = doc2query >> query_scorer >> query_filter - code = f'''import pyterrier as pt ; pt.init() -import pandas as pd -from pyterrier_doc2query import Doc2Query, QueryScorer, QueryFilter -from pyterrier_dr import ElectraScorer - -doc2query = Doc2Query({repr(model)}, append=False, num_samples={num_samples}) -scorer = ElectraScorer({repr(score_model)}) -pipeline = doc2query >> QueryScorer(scorer) >> QueryFilter(append=False, t={query_filter.t}) -# use append=True when indexing; t={query_filter.t} is the {filter_pct}th percentile for generated queries on MS MARCO - -pipeline({df2code(input)}) -''' - else: - pipeline = doc2query >> query_scorer - code = f'''import pyterrier as pt ; pt.init() -import pandas as pd -from pyterrier_doc2query import Doc2Query, QueryScorer -from pyterrier_dr import ElectraScorer - -doc2query = Doc2Query({repr(model)}, append=False, num_samples={num_samples}) -scorer = ElectraScorer({repr(score_model)}) -pipeline = doc2query >> QueryScorer(scorer) - -pipeline({df2code(input)}) -''' - res = pipeline(input) - vis = generate_vis(res) - res['querygen_score'] = res['querygen_score'].apply(lambda x: '[ ' + ', '.join(str(v) for v in x) + ' ]') - return (res, code2md(code, COLAB_INSTALL_MM, COLAB_NAME), vis) - -interface( - MarkdownFile('README.md'), - Demo( - predict, - EX_D, - [ - gr.Dropdown( - choices=[MODEL], - value=MODEL, - label='Model', - interactive=False, - ), gr.Checkbox( - value=doc2query.append, - label="Append", - ), gr.Slider( - minimum=1, - maximum=10, - value=doc2query.num_samples, - step=1., - label='# Queries' - )], - ), - MarkdownFile('mm.md'), - Demo( - predict_mm, - EX_D, - [ - gr.Dropdown( - choices=[MODEL], - value=MODEL, - label='Model', - interactive=False, - ), gr.Slider( - minimum=1, - maximum=10, - value=doc2query.num_samples, - step=1., - label='# Queries' - ), gr.Dropdown( - choices=[SCORE_MODEL], - value=SCORE_MODEL, - label='Scorer', - interactive=False, - ), gr.Slider( - minimum=0, - maximum=95, - value=10, - step=5, - label='Filter (top % of queries)' - )], - ), - MarkdownFile('wrapup.md'), -).launch(share=False) diff --git a/spaces/thapasushil/Multiverse/share_btn.py b/spaces/thapasushil/Multiverse/share_btn.py deleted file mode 100644 index 5bce98ad54d491f9d5691fea427efeccc77690cc..0000000000000000000000000000000000000000 --- a/spaces/thapasushil/Multiverse/share_btn.py +++ /dev/null @@ -1,93 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgCanvas){ - const blob = await new Promise(resolve => imgCanvas.toBlob(resolve)); - const imgId = Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - async function getOutoutImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = 
Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgCanvas = gradioEl.querySelector('canvas[key="drawing"]'); - const outputImgEl = gradioEl.querySelector('#output-img img'); - const promptTxt = gradioEl.querySelector('#input-text textarea').value; - let titleTxt = promptTxt; - if(titleTxt.length > 100){ - titleTxt = titleTxt.slice(0, 100) + ' ...'; - } - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!outputImgEl){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputImgFile = await getInputImgFile(inputImgCanvas); - const outputImgFile = await getOutoutImgFile(outputImgEl); - const files = [inputImgFile, outputImgFile]; - - const urls = await Promise.all(files.map((f) => uploadFile(f))); - - const htmlImgs = urls.map(url => ``); - const [inputImgUrl, outputImgUrl] = htmlImgs; - - const descriptionMd = `
          -
          -${inputImgUrl} - -${promptTxt} -
          -
          -${outputImgUrl} -
          -
          `; - - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - - const paramsStr = params.toString(); - window.open(`${window.location.href}/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/thejagstudio/procom/amazon/tests.py b/spaces/thejagstudio/procom/amazon/tests.py deleted file mode 100644 index 7ce503c2dd97ba78597f6ff6e4393132753573f6..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/amazon/tests.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.test import TestCase - -# Create your tests here. diff --git a/spaces/thiagolira/ChatPequenoPrincipe/cli_app.py b/spaces/thiagolira/ChatPequenoPrincipe/cli_app.py deleted file mode 100644 index 20fd8a7af75f42f506c8230d673d23b2eea39cb6..0000000000000000000000000000000000000000 --- a/spaces/thiagolira/ChatPequenoPrincipe/cli_app.py +++ /dev/null @@ -1,17 +0,0 @@ -import pickle -from query_data import get_chain - - -if __name__ == "__main__": - with open("vectorstore.pkl", "rb") as f: - vectorstore = pickle.load(f) - qa_chain = get_chain(vectorstore) - chat_history = [] - print("Chat with your docs!") - while True: - print("Human:") - question = input() - result = qa_chain({"question": question, "chat_history": chat_history}) - chat_history.append((question, result["answer"])) - print("AI:") - print(result["answer"]) diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free Download and install the best audio plugin bundle.md b/spaces/tialenAdioni/chat-gpt-api/logs/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free Download and install the best audio plugin bundle.md deleted file mode 100644 index be98209e5b1980f026601c92700e5da592410a90..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free Download and install the best audio plugin bundle.md +++ /dev/null @@ -1,69 +0,0 @@ - -

          How to Download and Install BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free

          - -

          If you are looking for a plugin bundle that can enhance your audio production with the legendary BBE sound, you might want to check out BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free. This is a collection of four amazing plugins that can give you more brightness, clarity, fullness, presence, and loudness in your mixes and masters.

          -

          BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free


Download: https://urlcod.com/2uKaeS



          - -

          In this article, we will tell you everything you need to know about BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, including its features, benefits, drawbacks, and how to download and install it on your computer. We will also give you some tips on how to optimize your SEO for this keyword and rank highly by search engine algorithms.

          - -

          What are the features of BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free?

          - -

          BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free is a plugin bundle that consists of four plugins that bring the legendary BBE sound into your digital domain. The plugins are:

          - -
            -
          • D82 Sonic Maximizer: This plugin is a software version of the BBE Sonic Maximizer hardware unit that has been used by professional musicians and studio engineers for years. It is designed to improve the sound quality of any audio source by adding more sparkle and depth to the high frequencies and more richness and punch to the low frequencies.
          • -
          • H82 Harmonic Maximizer: This plugin is a harmonic enhancer that increases presence and clarity, restores natural brightness, and adds deeper and extended low frequencies. It can be used on individual tracks or an entire mix, or live to enhance the sound of a P.A. system.
          • -
          • L82 Loudness Maximizer: This plugin is a mixing and mastering multi-band limiter that can dramatically increase the overall level of your mix without audible artifacts and pumping effects. It has ultimate transparency and allows you to control the loudness of each frequency band separately.
          • -
          • Mach 3 Bass: This plugin is a new addition to the Sonic Sweet lineup that can take your low end to new sonic depths. It can boost any type of bass instrument or signal with a powerful sub-harmonic synthesizer that adds more weight and body to your bass.
          • -
          - -

          What are the benefits of using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free?

          - -

          Using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free has some benefits for audio producers, such as:

          - -
            -
          • You can access the BBE processing technology within the digital domain without having to buy or use the hardware units.
          • -
          • You can enhance your audio production with the same brightness, clarity, fullness, presence, and loudness as the hardware modules with the added bonus of being fully automatable.
          • -
          • You can use the plugins on any audio source, such as vocals, guitars, drums, synths, etc., and get professional results.
          • -
          • You can use the plugins in any DAW that supports VST or RTAS formats, such as Cubase, Pro Tools, Logic, etc.
          • -
          • You can download and install the plugin bundle for free from reliable sources online.
          • -
          - -

          What are the drawbacks of using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free?

          - -

          Using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free also has some drawbacks for audio producers, such as:

          - -
            -
          • You might encounter some compatibility or performance issues when using the plugins on some DAWs or systems.
          • -
          • You might lose some features or graphics that are only available in the latest version of the Sonic Sweet bundle (version 4.0), which is not free.
          • -
          • You might confuse yourself or others when using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, as it is not an official product from BBE Sound or Nomad Factory.
          • -
          • You might violate some rules or terms of use when using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, as it is not a legal or ethical product.
          • -
          - -

          How to download and install BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free?

          - -

          If you want to download and install BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, you need to follow these steps:

          - -
            -
          1. Find a reliable source online that offers BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free. You can find some links on AudioDeluxe or 440Software.
          2. -
          3. Download the zip file to your computer and extract it to a folder.
          4. -
          5. Open your DAW and scan for new plugins or add them manually.
          6. -
          7. Select one of the plugins from the Sonic Sweet bundle and insert it on your audio track or bus.
          8. -
          9. Adjust the parameters and settings according to your preference and enjoy using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free.
          10. -
          - -

          How to optimize your SEO for BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free?

          - -

          If you want to optimize your SEO for BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, you need to follow some tips, such as:

          - -
            -
          1. Use the keyword BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free in your title, headers, and content. This can make your article more relevant and visible to search engines and users.
          2. -
          3. Use synonyms, variations, and related words of BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free in your content. This can make your article more diverse and natural to search engines and users.
          4. -
          5. Use links, images, videos, and other media related to BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free in your content. This can make your article more informative and attractive to search engines and users.
          6. -
          7. Use HTML formatting, headings, lists, bullet points, and other elements to organize your content. This can make your article more readable and user-friendly to search engines and users.
          8. -
9. Use meta tags, descriptions, keywords, and other elements to optimize your page. This can make your article more accessible and easier to index for search engines and users.

            -

            Conclusion

            - -

            BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free is a plugin bundle that can enhance your audio production with the legendary BBE sound. It consists of four plugins that can give you more brightness, clarity, fullness, presence, and loudness in your mixes and masters. It has some benefits and drawbacks for audio producers, and it requires some steps to download and install it on your computer. It also requires some tips to optimize your SEO for it and rank highly by search engine algorithms. If you want to use BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free, you need to download it from a reliable source online, extract it to a folder on your computer, open your DAW and insert the plugins on your audio tracks or buses, adjust the parameters and settings according to your preference, and enjoy using BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free. We hope you found this article helpful and informative. Thank you for reading.

            679dcb208e
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Filantropica film romanesc 17 A clever and original comedy that you can download legally.md b/spaces/tialenAdioni/chat-gpt-api/logs/Filantropica film romanesc 17 A clever and original comedy that you can download legally.md deleted file mode 100644 index 2c8a89499cfda87b2a016404bd9886eb26ae797b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Filantropica film romanesc 17 A clever and original comedy that you can download legally.md +++ /dev/null @@ -1,29 +0,0 @@ - -

            How to Download Filantropica, a Romanian Comedy Film from 2002

            -

Filantropica is a Romanian comedy film from 2002, directed by Nae Caranfil and starring Mircea Diaconu, Gheorghe Dinică, Mara Nicolescu, Viorica Vodă and Florin Zamfirescu. It tells the story of Ovidiu, a modest high school teacher who falls in love with a young aspiring model, but can't afford to take her out on dates. He gets involved in a scheme orchestrated by the official "writer" of the Bucharest beggars, who makes him act in various scenarios to manipulate people's generosity. The film was Romania's official submission for the Academy Award for Best Foreign Language Film in 2003 and received critical acclaim for its witty and cynical satire of Romanian society.

            -

            download filantropica film romanesc 17


            Download File ✓✓✓ https://urlcod.com/2uK8Xs



            -

            If you want to watch this hilarious and insightful film, you might be wondering how to download Filantropica film romanesc 17. In this article, we will show you some of the best ways to do that legally and safely.

            -

            Method 1: YouTube

            -

            One of the easiest and most accessible ways to download Filantropica film romanesc 17 is to use YouTube. The film is available in full HD on YouTube, uploaded by Petru 74, a channel that specializes in Romanian films. You can watch it online or download it for offline viewing using a YouTube downloader app or website. Here are the steps to follow:

            -
              -
            1. Go to https://www.youtube.com/watch?v=kM9kf30coZY, which is the link to the film on YouTube.
            2. -
            3. Copy the URL of the video from the address bar of your browser.
            4. -
            5. Go to a YouTube downloader website or app of your choice. Some examples are Y2mate, SaveFrom.net, 4K Video Downloader, etc.
            6. -
            7. Paste the URL of the video into the input box of the downloader and click on the download button.
            8. -
            9. Select the quality and format of the video you want to download. For example, you can choose MP4, 1080p, 720p, etc.
            10. -
            11. Wait for the download to finish and enjoy watching Filantropica film romanesc 17 on your device.
            12. -
            -

            Method 2: Torrent

            -

Another way to download Filantropica film romanesc 17 is to use a torrent client and a torrent file or magnet link. Torrenting is a peer-to-peer file sharing method that allows you to download large files from multiple sources at once. However, torrenting can be risky, and downloading copyrighted content without permission is illegal regardless of whether you use a VPN. Therefore, we advise you to be careful and responsible when using this method. Here are the steps to follow:

            -
              -
            1. Download and install a torrent client on your device. Some examples are uTorrent, BitTorrent, qBittorrent, etc.
2. Go to a torrent website or app of your choice. Some examples are The Pirate Bay, RARBG, 1337x, etc.
3. Search for Filantropica film romanesc 17 on the torrent website or app. You can also use keywords like "Filantropica 2002", "Filantropica Romania", etc.
4. Select a torrent file or magnet link that has good ratings, comments, seeds and peers. Seeds are users who have the complete file and peers are users who have parts of the file.
5. Download the torrent file or copy the magnet link and open it with your torrent client.
6. Wait for the download to finish and enjoy watching Filantropica film romanesc 17 on your device.
            -

            Conclusion

            -

Filantropica is a Romanian comedy film from 2002 that you can download and watch using YouTube or torrenting methods. However, before you do that, make sure you respect the rights of the creators and distributors of the film and use a VPN if necessary. We hope this article was helpful and informative for you. If you have any questions or suggestions, please feel free to leave a comment below.

            e753bf7129
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Hdd Regenerator 2011 Serial Number.md b/spaces/tialenAdioni/chat-gpt-api/logs/Hdd Regenerator 2011 Serial Number.md deleted file mode 100644 index ca19c367aea5393a43fcc895a46a93b5a848e378..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Hdd Regenerator 2011 Serial Number.md +++ /dev/null @@ -1,72 +0,0 @@ -
            -

            How to Use HDD Regenerator 2011 with Serial Number

            -

            HDD Regenerator 2011 is a software tool that can help you scan your hard drive and repair bad sectors that may affect your data and performance. Bad sectors are physical defects on the disk surface that can cause errors, crashes, or slow down your system. HDD Regenerator 2011 can detect and fix these problems without losing your data or formatting your drive.

            -

            Hdd Regenerator 2011 Serial Number


            Download Ziphttps://urlcod.com/2uK6ef



            -

            In this article, we will show you how to use HDD Regenerator 2011 with a serial number to repair your hard drive and improve its performance. You will need a valid serial number to activate the full version of the software. You can purchase one from the official website or find one online.

            -

            Step 1: Download and Install HDD Regenerator 2011

            -

The first step is to download and install HDD Regenerator 2011 on your computer. You can download it from the official website or from other sources. The file size is about 7.9 MB and it works on Windows XP, Vista, 7, 8, and 10.

            -

            Once you have downloaded the file, run it and follow the instructions to install the software. You may need to restart your computer after the installation.

            -

            Step 2: Enter Your Serial Number

            -

            The next step is to enter your serial number to activate the full version of HDD Regenerator 2011. You can find your serial number in your email confirmation if you purchased it from the official website or in the text file if you downloaded it from other sources.

            -

            To enter your serial number, launch HDD Regenerator 2011 and click on the "Enter Key" button at the bottom right corner of the main window. A dialog box will appear where you can type or paste your serial number. Click on "OK" to confirm.

            -

            Step 3: Scan and Repair Your Hard Drive

            -

            The final step is to scan and repair your hard drive using HDD Regenerator 2011. You can choose between two modes: scan only or scan and repair.

            -

            The scan only mode will check your hard drive for bad sectors and display a report of their location and status. This mode is useful if you want to see how damaged your hard drive is before repairing it.

            -

            The scan and repair mode will not only check your hard drive for bad sectors but also attempt to recover them using a special algorithm. This mode is recommended if you want to fix your hard drive and restore its functionality.

            -

            To start scanning and repairing your hard drive, select the mode you want and then choose the drive letter of your hard drive from the drop-down menu. Click on "Start Process" to begin. The software will show you a progress bar and a log of its actions. Depending on the size and condition of your hard drive, this process may take several hours or even days.

            -


            -

            When the process is finished, you will see a message saying "Done" and a summary of the results. You can save the log file for future reference or close the software.

            -

            Conclusion

            -

            HDD Regenerator 2011 is a powerful tool that can help you repair bad sectors on your hard drive and improve its performance. By using a serial number, you can unlock the full version of the software and access all its features. To use HDD Regenerator 2011 with a serial number, you need to download and install the software, enter your serial number, and scan and repair your hard drive.

            -

            We hope this article was helpful for you. If you have any questions or comments, please feel free to leave them below.

            e753bf7129
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Jpeg Repair Tool Why You Need It and How to Use It.md b/spaces/tialenAdioni/chat-gpt-api/logs/Jpeg Repair Tool Why You Need It and How to Use It.md deleted file mode 100644 index 1b931c5f326d4743f8a43540a79ae167e17f51e4..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Jpeg Repair Tool Why You Need It and How to Use It.md +++ /dev/null @@ -1,163 +0,0 @@ -
            -

            JPEG Repair Tool: How to Fix Corrupted JPEG or JPG Files

            -

            Have you ever encountered a situation where you try to open a JPEG or JPG file on your computer, camera, phone or other devices, but you get an error message saying that the file cannot be opened or viewed? Or you see a distorted or incomplete image with grey areas, pixelated colors, or strange artifacts? If yes, then you are dealing with a corrupted JPEG file.

            -

            Jpeg repair tool


            DOWNLOAD >>> https://urlcod.com/2uK9o1



            -

            Corrupted JPEG files can be very frustrating, especially if they contain precious memories or important information. You may wonder what causes JPEG corruption and how to fix it. Don't worry, in this article, we will explain everything you need to know about JPEG corruption and how to repair corrupted JPEG files with a professional JPEG repair tool or an online service. We will also share some tips on how to prevent JPEG corruption and protect your photos.

            -

            What is JPEG and Why It Can Get Corrupted

            -

            JPEG (Joint Photographic Experts Group) is a widely used method of lossy compression for digital images, especially for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

            -

            JPEG is also the most common image format used by digital cameras and other photographic image capture devices. It is also the standard format for storing and transmitting photographic images on the web. The file extensions for this format are .jpg and .jpeg.
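To see this storage-versus-quality tradeoff in practice, you can re-save the same picture at several quality settings and compare the resulting file sizes. Below is a small illustrative sketch in Python using the Pillow library; the file name photo.jpg is only a placeholder for any photo you have on hand, and the exact sizes you get will depend on the image.

```python
from PIL import Image
import os

img = Image.open("photo.jpg").convert("RGB")  # placeholder: any JPEG photo

# Re-save the same picture at several quality settings and compare sizes
for quality in (95, 75, 50, 25):
    out = f"photo_q{quality}.jpg"
    img.save(out, "JPEG", quality=quality)
    size_kb = os.path.getsize(out) / 1024
    print(f"quality={quality}: {size_kb:.0f} KB")
```

Lower quality values shrink the file dramatically, which is exactly the lossy compression behaviour described above.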

            -

            -

            However, JPEG files are not immune to corruption. There are various reasons why a JPEG file can get corrupted, such as:

            -
              -
            • Broken or corrupted header of the JPEG file
            • -
            • Corruption in JPEG image data, such as half grey image
            • -
            • Viruses or malware attack
            • -
            • The storage device has bad sectors or the file system has corrupted
            • -
            • Accidental deletion or formatting of the storage device
            • -
            • Improper transfer or download of the JPEG file
            • -
            • Power failure or system crash during editing or saving the JPEG file
            • -
            -

            When a JPEG file is corrupted, you may experience some symptoms like:

            -
              -
            • The file cannot be opened or viewed by any program
            • -
            • The file size is reduced significantly or increased abnormally
            • -
            • The file name or extension is changed or missing
            • -
            • The image is distorted, blurred, pixelated, or split into multiple parts
            • -
            • The image has grey areas, black lines, color shifts, or other artifacts
            • -
            -
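If you would rather not judge corruption by eye, you can also check files programmatically. The snippet below is a minimal sketch that assumes Python with the Pillow library installed; the photos folder is only an example path. It simply flags JPEGs that fail to open or decode.

```python
from pathlib import Path
from PIL import Image

def is_readable_jpeg(path: Path) -> bool:
    """Return True if the JPEG opens and fully decodes without errors."""
    try:
        with Image.open(path) as img:
            img.verify()               # checks the file structure (header and markers)
        with Image.open(path) as img:  # verify() invalidates the image, so reopen it
            img.load()                 # forces a full decode of the image data
        return True
    except Exception as err:
        print(f"{path}: appears corrupted ({err})")
        return False

# Example: flag unreadable files in a folder of photos
for file in Path("photos").glob("*.jp*g"):
    is_readable_jpeg(file)
```

A file that fails this check is a good candidate for the repair methods described in the next sections.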

            How to Repair Corrupted JPEG Files with a Professional JPEG Repair Tool

            -

            If you have encountered any of the above situations, don't panic. There is still a chance to repair your corrupted JPEG files and make them accessible and viewable again. One of the most effective methods is using a professional JPEG repair tool that can fix various types of corruption issues in JPEG files.

            -

            A good JPEG repair tool can offer you many benefits and features, such as:

            -
              -
            • Repair corrupted, broken, damaged, encrypted, unreadable, or inaccessible JPEG files
            • -
            • Restore corrupt header, invalid file structure, and image data of JPEG files
            • -
            • Repair multiple formats of images simultaneously, such as JPG, PNG, GIF, TIFF, BMP, etc.
            • -
            • Extract thumbnails of severely corrupt JPEG files
            • -
            • Preview repaired images before saving them
            • -
            • Save repaired images in a new folder without overwriting the original files
            • -
            • Support all types of storage devices used in cameras, phones, computers, etc.
            • -
            -

            To use a JPEG repair tool, you need to follow these steps:

            -
              -
            1. Download and install a reliable JPEG repair tool on your computer. You can search online for some reputable options or check out some reviews from other users.

2. Launch the JPEG repair tool and select "Add File" to browse and add the corrupted JPEG files that you want to repair. You can also drag and drop the files directly.

3. The JPEG repair tool will scan the selected files and display a preview of the repaired images. You can check the quality and details of the images before saving them.

4. If you are satisfied with the results, click on "Save Repaired Files" to choose a destination folder and save the repaired images on your computer.

5. You can now open and view your repaired images with any program that supports the image format.
            -

            To get the best results from using a JPEG repair tool, here are some tips and tricks that you can follow:

            -
              -
            • Do not edit or modify the corrupted JPEG files before repairing them.

            • -
            • Do not save the repaired images on the same device where the original files are stored.

            • -
            • If possible, use another copy of the same image taken on the same device with same settings and resolution as a reference file for repairing.

            • -
            • If the JPEG repair tool fails to fix your corrupted images completely, you can try another JPEG repair tool or contact their customer support for assistance.

            • -
            -

            How to Repair Corrupted JPEG Files Online

            -

            If you don't want to download or install any software on your computer, you can also try to repair corrupted JPEG files online by using an online JPEG repair service. An online JPEG repair service is a web-based application that allows you to upload and fix your corrupted images without any hassle.

            -

            An online JPEG repair service has some advantages and disadvantages that you should consider before using it:

| Advantages | Disadvantages |
| --- | --- |
| No need to download or install any software; easy and fast to use; free or low-cost options available; supports various image formats | Limited features and functions; depends on internet connection and speed; risk of privacy and security issues; no guarantee of successful repair |
            -

            If you decide to use an online JPEG repair service, you can compare some of the popular options online and choose the one that suits your needs. Here are some examples of online JPEG repair tools that you can try:

            -
              -
            • JPG.Repair: This online tool can repair damaged JPG, CR2, CR3, RAW pictures created by professional cameras. It can also extract thumbnails of severely corrupt JPEG files. It is free to use for up to 10 files per day.
            • -
            • OfficeRecovery for PixRecovery Online: This online tool can repair corrupted JPEG, GIF, TIFF, BMP, PNG or RAW images. It can also restore the original dimensions and color depth of the images. It offers free and paid options to download the repaired files.
            • -
            • Pix Fix: This online tool can clean up images that have been damaged by noise and excessive JPEG compression. It can also enhance the quality and clarity of the images. It is completely free to use.
            • -
            -

            To use an online JPEG repair service, you need to follow these steps:

            -
              -
            1. Go to the website of the online JPEG repair tool that you have chosen.

2. Select "Upload" or "Browse" to choose the corrupted JPEG files that you want to repair. You may need to agree to the terms of service or privacy policy before uploading.

3. The online JPEG repair tool will process your files and show you a preview or a sample of the repaired images. You can check the quality and details of the images before downloading.

4. If you are satisfied with the results, select "Download" or "Save" to get the repaired images on your computer. You may need to sign up for a free account or pay a fee depending on the service you use.

5. You can now open and view your repaired images with any program that supports the image format.
            -

            How to Prevent JPEG Corruption and Protect Your Photos

            -

            Now that you know how to repair corrupted JPEG files with a professional JPEG repair tool or an online service, you may also want to know how to prevent JPEG corruption and protect your photos in the future. Here are some best practices that you can follow:

            -
              -
            • Always use a reliable program or device to open, view, edit, save, transfer or download your JPEG files.

            • -
            • Always eject or remove your storage device safely from your computer or camera before unplugging it.

            • -
• Always back up your important photos regularly on another device or cloud service (a small scripted example follows this list).

            • -
            • Always scan and remove any viruses or malware from your devices with a trusted antivirus software.

            • -
            • Always avoid any physical damage or extreme conditions for your devices that store your photos.

            • -
            -
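To make the backup habit mentioned in the list above easier to keep, you can automate it. Here is a minimal sketch in Python; the photos and photos_backup folder names are only examples, and an external drive or cloud-synced folder would work just as well.

```python
import shutil
from pathlib import Path

SOURCE = Path("photos")          # example folder holding your originals
BACKUP = Path("photos_backup")   # example backup destination (external drive, NAS, ...)
BACKUP.mkdir(exist_ok=True)

for file in SOURCE.glob("*.jp*g"):
    target = BACKUP / file.name
    if not target.exists():            # copy only files that are not yet backed up
        shutil.copy2(file, target)     # copy2 also preserves timestamps and metadata
        print(f"backed up {file.name}")
```

Running it regularly (for example as a scheduled task) gives you a second copy to fall back on if a file ever gets corrupted.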

            Conclusion

            -

            In conclusion, JPEG is a popular image format that can get corrupted due to various reasons. When this happens, you may lose access to your precious photos or see distorted images. However, there are ways to fix this problem and restore your photos. You can use a professional JPEG repair tool or an online service to repair your corrupted JPEG files. You can also follow some tips to prevent JPEG corruption and protect your photos in the future.

            -

            We hope this article has helped you understand how to repair corrupted JPEG files with a JPEG repair tool. If you have any questions or suggestions, please feel free to leave a comment below.

            -

            FAQs

            -

            Q1: What is the difference between JPEG and JPG?

            -

            A1: JPEG and JPG are essentially the same thing. They both refer to the same image format that uses lossy compression. The only difference is that JPG is a shorter version of JPEG that was used by some older systems that had a limit of three letters for file extensions.

            -

            Q2: Can I repair corrupted JPEG files for free?

            -

            A2: Yes, you can. There are some free options available for repairing corrupted JPEG files, such as using another picture viewer or converter, using an online JPEG repair service, or using a free trial version of a professional JPEG repair tool. However, these options may have some limitations or drawbacks, such as low quality, limited features, privacy risks, or no guarantee of success.

            -

            Q3: How can I tell if my JPEG file is corrupted?

            -

            A3: There are some signs that can indicate if your JPEG file is corrupted, such as:

            -
              -
            • The file cannot be opened or viewed by any program
            • -
            • The file size is reduced significantly or increased abnormally
            • -
            • The file name or extension is changed or missing
            • -
            • The image is distorted, blurred, pixelated, or split into multiple parts
            • -
            • The image has grey areas, black lines, color shifts, or other artifacts
            • -
            -

            If you see any of these signs, you should try to repair your JPEG file as soon as possible.

            -

            Q4: Can I repair multiple JPEG files at once?

            -

            A4: Yes, you can. Some professional JPEG repair tools and online services allow you to repair multiple JPEG files at once by selecting them together or uploading them in a zip file. This can save you time and effort if you have many corrupted JPEG files to fix.

            -

            Q5: Can I recover deleted or lost JPEG files?

            -

            A5: Yes, you can. If you have accidentally deleted or lost your JPEG files from your device or storage media, you can try to recover them with a data recovery software. A data recovery software can scan your device or storage media and find the deleted or lost files for you. You can then preview and recover them with ease. However, you should act quickly before the deleted or lost files are overwritten by new data.

            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK Netflix Premium The Secret to Free Netflix Streaming.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK Netflix Premium The Secret to Free Netflix Streaming.md deleted file mode 100644 index c9b7a62cf059935c292e1678211e4bb5d2c2f7e2..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK Netflix Premium The Secret to Free Netflix Streaming.md +++ /dev/null @@ -1,109 +0,0 @@ -
            -

            APK Netflix Premium: How to Watch Netflix for Free on Your Android Device

            -

            Netflix is one of the most popular streaming services in the world, offering a wide range of movies, TV shows, documentaries, and more. However, not everyone can afford to pay for a monthly subscription or access all the features and content that Netflix has to offer. That's why some people look for alternative ways to watch Netflix for free on their Android devices.

            -

            One of these ways is using APK Netflix Premium, a modified version of the official Netflix app that allows you to watch unlimited content for free without any restrictions or limitations. But what is APK Netflix Premium exactly and how does it work? Is it safe and legal to use? What are the benefits and risks of using it? In this article, we will answer all these questions and more.

            -

            apk netflix premium


            DOWNLOADhttps://bltlly.com/2uOpbO



            -

            What is Netflix and why is it so popular?

            -

            Netflix is a streaming service that offers a wide range of movies, TV shows, documentaries, and more.

            -

            Netflix is a streaming service that allows you to watch thousands of titles from different genres, languages, countries, and categories. You can watch anything from blockbuster movies, award-winning TV shows, original series, documentaries, anime, comedy specials, reality shows, and more.

            -

            Netflix also produces its own content, known as Netflix Originals, which are exclusive to the platform and often receive critical acclaim and popularity. Some examples of Netflix Originals are Stranger Things, The Crown, The Witcher, Black Mirror, The Queen's Gambit, Bridgerton, Lupin, Money Heist, etc.

            -

            Netflix has over 200 million subscribers worldwide and is available in more than 190 countries.

            -

            Netflix is one of the most successful streaming services in the world, with over 200 million paid subscribers as of December 2020. It

            Netflix is available in more than 190 countries and regions, and supports over 30 languages. You can watch Netflix on almost any device that has an internet connection, such as smartphones, tablets, computers, smart TVs, gaming consoles, streaming devices, etc.

            -

            Netflix offers different plans and features depending on your preferences and budget.

            -

            Netflix offers four different plans for its subscribers: Basic, Standard, Premium, and Ultra. Each plan has different prices, features, and limitations. Here is a table that compares the four plans:

| Plan | Price | Features | Limitations |
| --- | --- | --- | --- |
| Basic | $8.99 per month | Watch on one screen at a time in standard definition (SD) | No HD or UHD quality, no downloads |
| Standard | $13.99 per month | Watch on two screens at a time in high definition (HD) | No UHD quality, limited downloads |
| Premium | $17.99 per month | Watch on four screens at a time in ultra high definition (UHD) | Unlimited downloads |
| Ultra | $19.99 per month | Watch on four screens at a time in ultra high definition (UHD) with high dynamic range (HDR) and Dolby Atmos sound | Unlimited downloads |

            Some of the features that Netflix offers are:

            -
              -
            • Downloads: You can download titles to your device and watch them offline.
            • -
            • Profiles: You can create up to five profiles for different users and preferences.
            • -
            • Recommendations: You can get personalized recommendations based on your viewing history and ratings.
            • -
            • Parental controls: You can set up parental controls to restrict access to certain titles or categories.
            • -
            • Accessibility: You can enable subtitles, captions, audio descriptions, or alternate audio for different languages.
            • -
            -

            What is APK Netflix Premium and how does it work?

            -

            APK Netflix Premium is a modified version of the official Netflix app that allows you to watch unlimited content for free.

            -

            APK Netflix Premium is a third-party app that is not affiliated with or endorsed by Netflix. It is a modified version of the official Netflix app that bypasses the subscription and login requirements of the original app and lets you watch unlimited content for free.

            -

            APK Netflix Premium works by using fake or hacked accounts to access the Netflix servers and stream the content to your device. You do not need to create an account or enter any personal information to use APK Netflix Premium. You just need to download and install the app on your device and start watching.

            -


            -

            APK Netflix Premium bypasses the subscription and login requirements of the original app and lets you access all the features and categories.

            -

            With APK Netflix Premium, you do not need to pay any subscription fees or sign in with any credentials to watch Netflix. You can access all the features and categories of the original app, such as downloads, profiles, recommendations, parental controls, accessibility, etc.

            -

            You can also access all the content that is available on Netflix, including the Netflix Originals, movies, TV shows, documentaries, anime, comedy specials, reality shows, etc. You can browse through different genres, languages, countries, and categories to find what you want to watch.

            -

            APK Netflix Premium also enables you to watch content in high quality, such as 4K and UHD, without any buffering or ads.

            -

            One of the main advantages of APK Netflix Premium is that it allows you to watch content in high quality, such as 4K and UHD, without any buffering or ads. You can enjoy the best picture and sound quality possible on your device without any interruptions or distractions.

            -

            APK Netflix Premium also supports HDR and Dolby Atmos sound for some titles, which enhance the color and contrast of the image and the depth and clarity of the sound. You can experience a more immersive and realistic viewing experience with APK Netflix Premium.

            -

            How to download and install APK Netflix Premium on your Android device?

            -

            To download and install APK Netflix Premium on your Android device, you need to follow these simple steps:

            -

            Step 1: Enable unknown sources on your device settings.

            -

            Before you can install APK Netflix Premium on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this:

            -
              -
            • Go to your device settings and tap on Security or Privacy.
            • -
            • Find the option that says Unknown sources or Install unknown apps and toggle it on.
            • -
            • A warning message will appear. Tap on OK or Allow to confirm.
            • -
            -

Step 2: Download the APK file, install it, and launch the app.

-

Once unknown sources are enabled, download the APK Netflix Premium file from a source you trust, locate it in your device storage, and tap it to install. When the installation finishes, open the app and you can start watching right away.

-

Benefits of using APK Netflix Premium:

            -

            You can access all the features and categories of the original app, such as downloads, profiles, recommendations, etc.

            -

Another benefit of using APK Netflix Premium is that you can access all the features and categories of the original app, such as downloads, profiles, recommendations, parental controls, accessibility, etc. You can download titles to your device and watch them offline. You can create up to five profiles for different users and preferences. You can get personalized recommendations based on your viewing history and ratings. You can set up parental controls to restrict access to certain titles or categories. You can enable subtitles, captions, audio descriptions, or alternate audio for different languages.

            -

            Risks of using APK Netflix Premium:

            -

            You may violate the terms and conditions of Netflix and face legal consequences.

            -

            The main risk of using APK Netflix Premium is that you may violate the terms and conditions of Netflix and face legal consequences. Netflix does not allow the use of any unauthorized or modified apps that access its service without permission or payment. Netflix may detect the use of APK Netflix Premium and terminate your access or take legal action against you. You may also be liable for damages or losses caused by your use of APK Netflix Premium.

            -

            You may expose your device to malware or viruses that can harm your data or privacy.

            -

            Another risk of using APK Netflix Premium is that you may expose your device to malware or viruses that can harm your data or privacy. APK Netflix Premium is not an official app and is not verified by Google Play Protect. It may contain malicious code or hidden functions that can infect your device or steal your information. You may also download the APK file from untrusted sources that may inject malware or viruses into the file. You should always be careful when downloading and installing any app from unknown sources and scan them with a reliable antivirus software.

            -

            You may experience some bugs or glitches that can affect the performance or functionality of the app.

            -

            A third risk of using APK Netflix Premium is that you may experience some bugs or glitches that can affect the performance or functionality of the app. APK Netflix Premium is not an official app and is not updated or maintained by Netflix. It may not be compatible with the latest version of Netflix or your device. It may also have some errors or defects that can cause the app to crash, freeze, lag, or malfunction. You may not be able to watch some titles or access some features due to these issues.

            -

            Conclusion and FAQs

            -

            In conclusion, APK Netflix Premium is a modified version of the official Netflix app that allows you to watch unlimited content for free on your Android device. It has some benefits, such as watching content in high quality, accessing all the features and categories, and saving money on subscription fees. However, it also has some risks, such as violating the terms and conditions of Netflix, exposing your device to malware or viruses, and experiencing some bugs or glitches.

            -

            If you want to use APK Netflix Premium, you should be aware of these benefits and risks and make an informed decision. You should also follow the steps above to download and install APK Netflix Premium safely and correctly on your device.

            -

            Here are some FAQs that you may have about APK Netflix Premium:

            -
              -
            • Q: Is APK Netflix Premium legal?
            • -
            • A: No, APK Netflix Premium is not legal. It violates the terms and conditions of Netflix and infringes on its intellectual property rights. Using APK Netflix Premium may result in legal action from Netflix or other authorities.
            • -
            • Q: Is APK Netflix Premium safe?
            • -
            • A: No, APK Netflix Premium is not safe. It may contain malware or viruses that can harm your device or data. It may also have some bugs or glitches that can affect the performance or functionality of the app. You should always scan the APK file with a reliable antivirus software before installing it on your device.
            • -
            • Q: Is APK Netflix Premium free?
            • -
            • A: Yes, APK Netflix Premium is free. You do not need to pay any subscription fees or sign in with any credentials to watch Netflix with APK Netflix Premium. However, you may pay a price in terms of legal consequences, security risks, or quality issues.
            • -
            • Q: How do I update APK Netflix Premium?
            • -
            • A: To update APK Netflix Premium, you need to download and install the latest version of the APK file from a trusted source. You should also uninstall the previous version of the app before installing the new one.
            • -
            • Q: Can I use APK Netflix Premium on other devices?
            • -
            • A: No, APK Netflix Premium is only compatible with Android devices. You cannot use it on iOS devices, Windows devices, Mac devices, smart TVs, gaming consoles, streaming devices , etc. You can only use APK Netflix Premium on Android smartphones, tablets, or emulators.
            • -
            -

            I hope this article has helped you understand what APK Netflix Premium is and how to use it. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy watching!

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cricket League Mod APK How to Get Allways Perfect Shots and More.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cricket League Mod APK How to Get Allways Perfect Shots and More.md deleted file mode 100644 index 6d19edf1f161b09a399de2aca1f69a46b9f39245..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cricket League Mod APK How to Get Allways Perfect Shots and More.md +++ /dev/null @@ -1,107 +0,0 @@ - -

            Cricket League Mod APK Free Download: A Guide for Android Users

            -

            Cricket is one of the most popular sports in the world, especially in countries like India, Pakistan, Australia, and England. If you are a fan of cricket and want to enjoy a realistic and immersive cricket game on your Android device, you might have heard of Cricket League. This is a game developed by Miniclip, a well-known company that produces many casual and fun games for mobile platforms.

            -

            cricket league mod apk free download


            DOWNLOAD 🗹 https://bltlly.com/2uOso4



            -

            Cricket League is a game that lets you create your own team, customize your players, choose your stadium, and compete in various tournaments and leagues. You can also play online with other players from around the world and challenge them in real-time matches. The game has stunning graphics, realistic physics, and smooth controls that make it one of the best cricket games available on Android.

            -

            However, as with many free-to-play games, Cricket League also has some limitations and drawbacks that might affect your gaming experience. For example, you need to earn coins and gems to unlock new players, stadiums, and equipment. You also have to watch ads to get some extra rewards or skip some waiting time. And if you want to access some premium features, you need to pay real money or use a rooted device.

            -

            That's why some players look for a way to download the mod apk version of Cricket League. A mod apk is a modified version of the original game that has some changes or additions that give you some advantages or benefits. For example, a mod apk might give you unlimited coins and gems, unlock all players and stadiums, remove ads, or bypass root detection. In this article, we will show you how to download and install Cricket League mod apk for free on your Android device. We will also discuss the features, pros and cons, and alternatives of this mod apk. So, let's get started!

            -

            Features of Cricket League Mod APK

            -

            Cricket League mod apk is a version of the game that has been modified by some third-party developers or hackers to give you some extra features that are not available in the original game. Here are some of the features that you can enjoy with this mod apk:

            -
              -
            • Unlimited coins and gems: Coins and gems are the main currencies in Cricket League. You need them to unlock new players, stadiums, equipment, and other items. You can earn them by playing matches, completing achievements, watching ads, or buying them with real money. However, with this mod apk, you will get unlimited coins and gems for free. You can use them to buy anything you want without worrying about running out of them.
            • -
            • All players and stadiums unlocked: One of the fun aspects of Cricket League is that you can create your own team and customize your players. You can choose from different countries, names, faces, hairstyles, outfits, skills, and abilities. You can also choose from different stadiums, each with its own characteristics and atmosphere. However, not all players and stadiums are available from the start. You need to unlock them by spending coins and gems or by reaching certain levels. But with this mod apk, you will have access to all players and stadiums from the beginning. You can create your dream team and play in any stadium you want.
            • -
            • No ads and no root required: Ads are a common feature in many free-to-play games. They are a way for the developers to generate some revenue and keep the game running. However, they can also be annoying and distracting for the players. They can interrupt your gameplay, consume your data, or slow down your device. Some games also require you to have a rooted device to access some premium features or bypass some restrictions. Rooting your device can be risky and complicated, as it can void your warranty, expose your device to malware, or cause some errors. But with this mod apk, you don't have to worry about any of that. You can enjoy the game without any ads or root requirement.
            • -
            -

            How to Download and Install Cricket League Mod APK

            -

            If you are interested in downloading and installing Cricket League mod apk on your Android device, you need to follow these simple steps:

            -


            -
              -
            1. Enable unknown sources on your device: Since this mod apk is not available on the official Google Play Store, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
2. Download the mod apk file from a trusted source: The next step is to download the mod apk file from a reliable and safe source. There are many websites that offer mod apk files for various games, but not all of them are trustworthy. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading any mod apk file. You can use Google or any other search engine to find some reputable sources that provide Cricket League mod apk file. Alternatively, you can use the link below to download the mod apk file directly from our website.
3. Install the mod apk file and launch the game: After downloading the mod apk file, you need to install it on your device. To do this, locate the file in your device storage and tap on it. You might see a warning message that says "This type of file can harm your device". Don't worry, this is just a standard message that appears when you install apps from unknown sources. Just tap on "OK" and proceed with the installation. Once the installation is complete, you can launch the game and enjoy the mod apk features.
            -

            Pros and Cons of Cricket League Mod APK

            -

            Cricket League mod apk is a great way to enhance your gaming experience and have more fun with the game. However, it also has some drawbacks that you should be aware of before using it. Here are some of the pros and cons of Cricket League mod apk:

| Pros | Cons |
| --- | --- |
| Enhanced gameplay: With unlimited coins and gems, you can unlock all players and stadiums, customize your team, and buy any item you want. You can also enjoy the game without any ads or root requirement. | Potential security risks: Since this mod apk is not from the official source, it might contain some malicious code or hidden functions that can harm your device or compromise your privacy. You should always scan the file with an antivirus before installing it and avoid granting any unnecessary permissions to the app. |
| More customization: You can create your own team and choose from different countries, names, faces, hairstyles, outfits, skills, and abilities. You can also choose from different stadiums, each with its own characteristics and atmosphere. | Compatibility issues: This mod apk might not work well with some devices or Android versions. It might cause some errors, crashes, or glitches that can affect your gameplay. You should always check the compatibility of the mod apk with your device before installing it. |
| Free resources: You don't have to spend any real money or watch any ads to get coins and gems. You can get them for free with this mod apk. | Legal concerns: This mod apk is not authorized by the original developers of Cricket League. It violates their terms of service and intellectual property rights. Using this mod apk might result in some legal actions or consequences from them. |
            -

            Alternatives to Cricket League Mod APK

            If you are looking for some alternatives to Cricket League mod apk, you might want to try some of these other cricket games for Android. These games are also free to play and offer different features and modes that might suit your preferences and tastes. Here are some of the best cricket games for Android that you can download and play:

            -
              -
            • Real Cricket 20: This is another realistic and immersive cricket game that features high-quality graphics, authentic player faces, great-looking team jerseys, and accurate live stadiums. You can choose from various match types, such as ODI, T20, and Test, as well as real-time multiplayer games where you can play 1v1, 2v2, or co-op with your friends. You can also participate in various tournaments and events, such as the World Cup, the Asia Cup, and the Big Bash. You can also bid on your favorite players and create your own team in the auction mode.
            • -
            • World Cricket Championship 2: This is one of the most popular cricket games on Android, with over 100 million downloads and a 4.3-star rating on the Google Play Store. This game offers a stunning gameplay experience with realistic animations and physics, motion-captured strokes, and professional commentary. You can customize your players, choose your stadium, and compete in various leagues and tournaments. You can also play online with other players from around the world and challenge them in real-time matches.
            • -
            • Stick Cricket Super League: If you prefer a more casual and fun cricket game, you might enjoy Stick Cricket Super League. This is a game that features cartoonish graphics, simple controls, and humorous gameplay. You can create your own character, choose your team name, and compete in a fast-paced T20 league. You can also recruit star players, upgrade your skills, and smash sixes all over the park.
            • -
            -

            Conclusion

            -

            Cricket League mod apk is a modified version of the original game that gives you some extra features and benefits that are not available in the official game. You can get unlimited coins and gems, unlock all players and stadiums, remove ads, and bypass root detection with this mod apk. However, you should also be aware of the potential security risks, compatibility issues, and legal concerns that come with using this mod apk. You should always download the mod apk file from a trusted source and scan it with an antivirus before installing it on your device.

            -

            If you are looking for some alternatives to Cricket League mod apk, you can try some of the other cricket games for Android that we have mentioned above. These games are also free to play and offer different features and modes that might suit your preferences and tastes. You can enjoy realistic and immersive cricket games or casual and fun cricket games on your Android device.

            -

            We hope this article has helped you learn more about Cricket League mod apk and how to download and install it on your device. We also hope you have found some of the best cricket games for Android that you can play instead of or along with Cricket League mod apk. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

            -

            FAQs

            -

            Here are some of the frequently asked questions about Cricket League mod apk:

            -
              -
1. Is Cricket League mod apk safe to use?

              Cricket League mod apk is not from the official source, so it might contain some malicious code or hidden functions that can harm your device or compromise your privacy. You should always scan the file with an antivirus before installing it on your device and avoid granting any unnecessary permissions to the app.

              -
2. How can I update Cricket League mod apk?

              Cricket League mod apk might not be compatible with the latest version of the original game or the Android system. You might need to uninstall the mod apk and download a new version from a trusted source. However, you might lose your progress or data if you do so.

              -
            5. Can I play Cricket League mod apk online with other players?
            6. -

              Cricket League mod apk might not work well with the online mode of the original game. You might face some errors, crashes, or glitches when playing online with other players. You might also get banned or suspended by the developers if they detect that you are using a mod apk.

              -
            7. What are the minimum requirements for Cricket League mod apk?
            8. -

              The minimum requirements for Cricket League mod apk are similar to those of the original game. You need an Android device with at least 4 GB of RAM, 1 GB of free storage space, and Android 5.0 or higher. If you want to double-check these values on your own phone, see the short adb sketch after this FAQ list.

              -
            9. Where can I find more mod apk games for Android?
            10. -

              There are many websites that offer mod apk games for Android, but not all of them are trustworthy; some host files laced with viruses, malware, or spyware that can harm your device or steal your personal information. Do some research before downloading any mod apk game and stick to reputable sources, which you can find with Google or any other search engine. Alternatively, you can use the link below to find some of the best mod apk games for Android that we have reviewed and tested.

              -
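            If you want to verify those numbers on your own phone before sideloading anything, one convenient route is adb. The Python sketch below is purely illustrative and is not part of Cricket League or the mod apk: it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled, and a single device is connected. It simply reads the Android version, total RAM, and free storage and compares them against the minimums quoted above.

```python
# Minimal sketch (assumption: adb is installed and one device is connected).
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

release = adb("shell", "getprop", "ro.build.version.release")  # e.g. "12"
sdk = int(adb("shell", "getprop", "ro.build.version.sdk"))     # Android 5.0 corresponds to API 21

# First line of /proc/meminfo looks like "MemTotal:  3882912 kB"
mem_total_kb = int(adb("shell", "cat", "/proc/meminfo").splitlines()[0].split()[1])
ram_gb = mem_total_kb / (1024 * 1024)

# `df -k /data` reports sizes in 1K blocks; the 4th column is available space
free_kb = int(adb("shell", "df", "-k", "/data").splitlines()[-1].split()[3])
free_gb = free_kb / (1024 * 1024)

print(f"Android {release} (API {sdk}), RAM {ram_gb:.1f} GB, free storage {free_gb:.1f} GB")
print("Meets the stated minimums:", sdk >= 21 and ram_gb >= 4 and free_gb >= 1)
```

            This is only a convenience check; the exact figures reported by df and /proc/meminfo vary slightly between devices and Android versions.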

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Alice Hd 1080p Online Movies.md b/spaces/tioseFevbu/cartoon-converter/scripts/Alice Hd 1080p Online Movies.md deleted file mode 100644 index 030d1e283ec63a7b6c2d6c65c389c3cc071c6ff0..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Alice Hd 1080p Online Movies.md +++ /dev/null @@ -1,24 +0,0 @@ - -

            How to Watch Alice in HD 1080p Online for Free

            -

            Alice is a movie that has captivated audiences with its stunning visuals, intriguing plot, and stellar cast. Whether you are a fan of the original Alice books by Lewis Carroll, or you just love a good fantasy adventure, you might be wondering how to watch Alice in HD 1080p online for free.

            -

            Alice hd 1080p online movies


            Download Ziphttps://urlcod.com/2uHy4R



            -

            Fortunately, there are some options available for you to enjoy this movie without breaking the bank. Here are some of the best ways to stream Alice in HD 1080p online for free:

            -
              -
            • Starz: If you have a subscription to Starz, you can watch Alice on their website or app. Starz offers a 7-day free trial for new customers, so you can sign up and watch Alice without paying anything. Starz also has other movies and shows that you might like, such as Outlander, Power, and American Gods.
            • -
            • Internet Archive: If you don't mind watching an older version of Alice, you can check out the Internet Archive. This website has a collection of public domain and creative commons media, including the 1951 animated musical fantasy film Alice in Wonderland by Walt Disney Productions. This film is based on the Alice books by Lewis Carroll and features the voices of Kathryn Beaumont as Alice, Sterling Holloway as the Cheshire Cat, Verna Felton as the Queen of Hearts, and Ed Wynn as the Mad Hatter. You can watch this film online or download it for free.
            • -
            • Dailymotion: Another option for watching Alice online is Dailymotion. This is a video-sharing platform that hosts user-generated and licensed content. You can find some clips and trailers of Alice on Dailymotion, as well as some full-length movies uploaded by users. However, be aware that the quality and legality of these videos may vary, and some of them may be removed due to copyright infringement.
            • -
            -

            These are some of the ways to watch Alice in HD 1080p online for free. However, if you want to support the creators and actors of this movie, you might want to consider renting or buying it from legitimate sources, such as Amazon Video, Google Play Movies, YouTube, Vudu, Microsoft Store, Redbox, or Apple TV. These platforms offer Alice in HD 1080p for a reasonable price and guarantee a high-quality viewing experience.

            -

            Whatever option you choose, we hope you enjoy watching Alice in HD 1080p online for free!

            - -

            Alice: A Movie Inspired by True Events

            -

            One of the most surprising and shocking aspects of Alice is that it is inspired by true events. The movie draws on the very real history of black Americans still being held in slavery even after the Emancipation Proclamation, and the most prominent example it draws on is the life of Mae Louise Miller.

            -

            -

            Mae Louise Miller was born into slavery in 1923 on a plantation in Georgia. She didn't get her freedom until 1961, when she ran away from the plantation and found a family that rescued her and her children. She and her family were unaware that things had changed, as they had no TV or other access to the outside world; they just assumed their situation was like that for all black people.

            -

            Mae Louise Miller's story was documented by journalist John Sibley in a series of articles for The New York Times in 1963. He also wrote a book about her called Without Sanctuary: The Story of Mae Louise Miller. Her story inspired other writers and filmmakers to explore the hidden history of slavery in America.

            -

            Alice: A Movie with a Stellar Cast and Crew

            -

            Alice is not only a movie with a powerful message, but also a movie with a stellar cast and crew. The movie stars Keke Palmer as Alice, Jonny Lee Miller as Paul Bennett, Common as Frank, Gaius Charles as Joseph, and Alicia Witt as Rachel. The movie also features Natasha Yvette Williams as Ruth, Madelon Curtis as Mrs Bennett, Jaxon Goldenberg as Daniel Bennett, Kenneth Farmer as Moses, Craig Stark as Aaron, and David Andrew Nash as Danny.

            -

            The movie is written and directed by Krystin Ver Linden, in her directorial debut. She also serves as a producer along with Peter Lawson. The score is composed by Common, who also contributes his song "Brother's Gonna Work It Out" to the soundtrack; cinematography is by Alex Disenhof and editing by Byron Smith.

            -

            Alice had its world premiere at the Sundance Film Festival on January 23, 2022, and was released in the United States on March 18, 2022, by Roadside Attractions and Vertical Entertainment. The movie received mixed reviews from critics but was praised for its performances and its social relevance. It earned three NAACP Image Award nominations and a Saturn Award nomination for best independent film.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/tj5miniop/distilgpt2/README.md b/spaces/tj5miniop/distilgpt2/README.md deleted file mode 100644 index 71b5cb367f08519bef8dfa9c04609f29a71928ff..0000000000000000000000000000000000000000 --- a/spaces/tj5miniop/distilgpt2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Distilgpt2 -emoji: 🔥 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py deleted file mode 100644 index 7fe1a3e33a3adbfd9ad1126a22d7175154ebc200..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py +++ /dev/null @@ -1,190 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import base64 -import io -import json -import zlib - -from pip._vendor import msgpack -from pip._vendor.requests.structures import CaseInsensitiveDict - -from .compat import HTTPResponse, pickle, text_type - - -def _b64_decode_bytes(b): - return base64.b64decode(b.encode("ascii")) - - -def _b64_decode_str(s): - return _b64_decode_bytes(s).decode("utf8") - - -_default_body_read = object() - - -class Serializer(object): - def dumps(self, request, response, body=None): - response_headers = CaseInsensitiveDict(response.headers) - - if body is None: - # When a body isn't passed in, we'll read the response. We - # also update the response with a new file handler to be - # sure it acts as though it was never read. - body = response.read(decode_content=False) - response._fp = io.BytesIO(body) - - # NOTE: This is all a bit weird, but it's really important that on - # Python 2.x these objects are unicode and not str, even when - # they contain only ascii. The problem here is that msgpack - # understands the difference between unicode and bytes and we - # have it set to differentiate between them, however Python 2 - # doesn't know the difference. Forcing these to unicode will be - # enough to have msgpack know the difference. 
- data = { - u"response": { - u"body": body, # Empty bytestring if body is stored separately - u"headers": dict( - (text_type(k), text_type(v)) for k, v in response.headers.items() - ), - u"status": response.status, - u"version": response.version, - u"reason": text_type(response.reason), - u"strict": response.strict, - u"decode_content": response.decode_content, - } - } - - # Construct our vary headers - data[u"vary"] = {} - if u"vary" in response_headers: - varied_headers = response_headers[u"vary"].split(",") - for header in varied_headers: - header = text_type(header).strip() - header_value = request.headers.get(header, None) - if header_value is not None: - header_value = text_type(header_value) - data[u"vary"][header] = header_value - - return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)]) - - def loads(self, request, data, body_file=None): - # Short circuit if we've been given an empty set of data - if not data: - return - - # Determine what version of the serializer the data was serialized - # with - try: - ver, data = data.split(b",", 1) - except ValueError: - ver = b"cc=0" - - # Make sure that our "ver" is actually a version and isn't a false - # positive from a , being in the data stream. - if ver[:3] != b"cc=": - data = ver + data - ver = b"cc=0" - - # Get the version number out of the cc=N - ver = ver.split(b"=", 1)[-1].decode("ascii") - - # Dispatch to the actual load method for the given version - try: - return getattr(self, "_loads_v{}".format(ver))(request, data, body_file) - - except AttributeError: - # This is a version we don't have a loads function for, so we'll - # just treat it as a miss and return None - return - - def prepare_response(self, request, cached, body_file=None): - """Verify our vary headers match and construct a real urllib3 - HTTPResponse object. - """ - # Special case the '*' Vary value as it means we cannot actually - # determine if the cached response is suitable for this request. - # This case is also handled in the controller code when creating - # a cache entry, but is left here for backwards compatibility. - if "*" in cached.get("vary", {}): - return - - # Ensure that the Vary headers for the cached response match our - # request - for header, value in cached.get("vary", {}).items(): - if request.headers.get(header, None) != value: - return - - body_raw = cached["response"].pop("body") - - headers = CaseInsensitiveDict(data=cached["response"]["headers"]) - if headers.get("transfer-encoding", "") == "chunked": - headers.pop("transfer-encoding") - - cached["response"]["headers"] = headers - - try: - if body_file is None: - body = io.BytesIO(body_raw) - else: - body = body_file - except TypeError: - # This can happen if cachecontrol serialized to v1 format (pickle) - # using Python 2. A Python 2 str(byte string) will be unpickled as - # a Python 3 str (unicode string), which will cause the above to - # fail with: - # - # TypeError: 'str' does not support the buffer interface - body = io.BytesIO(body_raw.encode("utf8")) - - return HTTPResponse(body=body, preload_content=False, **cached["response"]) - - def _loads_v0(self, request, data, body_file=None): - # The original legacy cache data. This doesn't contain enough - # information to construct everything we need, so we'll treat this as - # a miss. 
- return - - def _loads_v1(self, request, data, body_file=None): - try: - cached = pickle.loads(data) - except ValueError: - return - - return self.prepare_response(request, cached, body_file) - - def _loads_v2(self, request, data, body_file=None): - assert body_file is None - try: - cached = json.loads(zlib.decompress(data).decode("utf8")) - except (ValueError, zlib.error): - return - - # We need to decode the items that we've base64 encoded - cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"]) - cached["response"]["headers"] = dict( - (_b64_decode_str(k), _b64_decode_str(v)) - for k, v in cached["response"]["headers"].items() - ) - cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"]) - cached["vary"] = dict( - (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v) - for k, v in cached["vary"].items() - ) - - return self.prepare_response(request, cached, body_file) - - def _loads_v3(self, request, data, body_file): - # Due to Python 2 encoding issues, it's impossible to know for sure - # exactly how to load v3 entries, thus we'll treat these as a miss so - # that they get rewritten out as v4 entries. - return - - def _loads_v4(self, request, data, body_file=None): - try: - cached = msgpack.loads(data, raw=False) - except ValueError: - return - - return self.prepare_response(request, cached, body_file) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/codec.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/codec.py deleted file mode 100644 index 1ca9ba62c208527b796b49306f4b8c95eb868a51..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/codec.py +++ /dev/null @@ -1,112 +0,0 @@ -from .core import encode, decode, alabel, ulabel, IDNAError -import codecs -import re -from typing import Tuple, Optional - -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class Codec(codecs.Codec): - - def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return b"", 0 - - return encode(data), len(data) - - def decode(self, data: bytes, errors: str = 'strict') -> Tuple[str, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return '', 0 - - return decode(data), len(data) - -class IncrementalEncoder(codecs.BufferedIncrementalEncoder): - def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return "", 0 - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' 
- - result = [] - size = 0 - for label in labels: - result.append(alabel(label)) - if size: - size += 1 - size += len(label) - - # Join with U+002E - result_str = '.'.join(result) + trailing_dot # type: ignore - size += len(trailing_dot) - return result_str, size - -class IncrementalDecoder(codecs.BufferedIncrementalDecoder): - def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return ('', 0) - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(ulabel(label)) - if size: - size += 1 - size += len(label) - - result_str = '.'.join(result) + trailing_dot - size += len(trailing_dot) - return (result_str, size) - - -class StreamWriter(Codec, codecs.StreamWriter): - pass - - -class StreamReader(Codec, codecs.StreamReader): - pass - - -def getregentry() -> codecs.CodecInfo: - # Compatibility as a search_function for codecs.register() - return codecs.CodecInfo( - name='idna', - encode=Codec().encode, # type: ignore - decode=Codec().decode, # type: ignore - incrementalencoder=IncrementalEncoder, - incrementaldecoder=IncrementalDecoder, - streamwriter=StreamWriter, - streamreader=StreamReader, - ) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py deleted file mode 100644 index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py +++ /dev/null @@ -1,16 +0,0 @@ -from typing import List, TypeVar - -T = TypeVar("T") - - -class Stack(List[T]): - """A small shim over builtin list.""" - - @property - def top(self) -> T: - """Get top of stack.""" - return self[-1] - - def push(self, item: T) -> None: - """Push an item on to the stack (append in stack nomenclature).""" - self.append(item) diff --git a/spaces/tmaham/DS-Fusion-Express/ldm/modules/distributions/distributions.py b/spaces/tmaham/DS-Fusion-Express/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/tmaham/DS-Fusion-Express/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def 
sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). - logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/tomofi/MMOCR/mmocr/core/evaluation/utils.py b/spaces/tomofi/MMOCR/mmocr/core/evaluation/utils.py deleted file mode 100644 index bb02b096f2de12612fe181626ce2aad4eccc6a91..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/core/evaluation/utils.py +++ /dev/null @@ -1,547 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -from shapely.geometry import Polygon as plg - -import mmocr.utils as utils - - -def ignore_pred(pred_boxes, gt_ignored_index, gt_polys, precision_thr): - """Ignore the predicted box if it hits any ignored ground truth. - - Args: - pred_boxes (list[ndarray or list]): The predicted boxes of one image. - gt_ignored_index (list[int]): The ignored ground truth index list. - gt_polys (list[Polygon]): The polygon list of one image. - precision_thr (float): The precision threshold. - - Returns: - pred_polys (list[Polygon]): The predicted polygon list. - pred_points (list[list]): The predicted box list represented - by point sequences. - pred_ignored_index (list[int]): The ignored text index list. 
- """ - - assert isinstance(pred_boxes, list) - assert isinstance(gt_ignored_index, list) - assert isinstance(gt_polys, list) - assert 0 <= precision_thr <= 1 - - pred_polys = [] - pred_points = [] - pred_ignored_index = [] - - gt_ignored_num = len(gt_ignored_index) - # get detection polygons - for box_id, box in enumerate(pred_boxes): - poly = points2polygon(box) - pred_polys.append(poly) - pred_points.append(box) - - if gt_ignored_num < 1: - continue - - # ignore the current detection box - # if its overlap with any ignored gt > precision_thr - for ignored_box_id in gt_ignored_index: - ignored_box = gt_polys[ignored_box_id] - inter_area = poly_intersection(poly, ignored_box) - area = poly.area - precision = 0 if area == 0 else inter_area / area - if precision > precision_thr: - pred_ignored_index.append(box_id) - break - - return pred_polys, pred_points, pred_ignored_index - - -def compute_hmean(accum_hit_recall, accum_hit_prec, gt_num, pred_num): - """Compute hmean given hit number, ground truth number and prediction - number. - - Args: - accum_hit_recall (int|float): Accumulated hits for computing recall. - accum_hit_prec (int|float): Accumulated hits for computing precision. - gt_num (int): Ground truth number. - pred_num (int): Prediction number. - - Returns: - recall (float): The recall value. - precision (float): The precision value. - hmean (float): The hmean value. - """ - - assert isinstance(accum_hit_recall, (float, int)) - assert isinstance(accum_hit_prec, (float, int)) - - assert isinstance(gt_num, int) - assert isinstance(pred_num, int) - assert accum_hit_recall >= 0.0 - assert accum_hit_prec >= 0.0 - assert gt_num >= 0.0 - assert pred_num >= 0.0 - - if gt_num == 0: - recall = 1.0 - precision = 0.0 if pred_num > 0 else 1.0 - else: - recall = float(accum_hit_recall) / gt_num - precision = 0.0 if pred_num == 0 else float(accum_hit_prec) / pred_num - - denom = recall + precision - - hmean = 0.0 if denom == 0 else (2.0 * precision * recall / denom) - - return recall, precision, hmean - - -def box2polygon(box): - """Convert box to polygon. - - Args: - box (ndarray or list): A ndarray or a list of shape (4) - that indicates 2 points. - - Returns: - polygon (Polygon): A polygon object. - """ - if isinstance(box, list): - box = np.array(box) - - assert isinstance(box, np.ndarray) - assert box.size == 4 - boundary = np.array( - [box[0], box[1], box[2], box[1], box[2], box[3], box[0], box[3]]) - - point_mat = boundary.reshape([-1, 2]) - return plg(point_mat) - - -def points2polygon(points): - """Convert k points to 1 polygon. - - Args: - points (ndarray or list): A ndarray or a list of shape (2k) - that indicates k points. - - Returns: - polygon (Polygon): A polygon object. - """ - if isinstance(points, list): - points = np.array(points) - - assert isinstance(points, np.ndarray) - assert (points.size % 2 == 0) and (points.size >= 8) - - point_mat = points.reshape([-1, 2]) - return plg(point_mat) - - -def poly_make_valid(poly): - """Convert a potentially invalid polygon to a valid one by eliminating - self-crossing or self-touching parts. - - Args: - poly (Polygon): A polygon needed to be converted. - - Returns: - A valid polygon. - """ - return poly if poly.is_valid else poly.buffer(0) - - -def poly_intersection(poly_det, poly_gt, invalid_ret=None, return_poly=False): - """Calculate the intersection area between two polygon. - - Args: - poly_det (Polygon): A polygon predicted by detector. - poly_gt (Polygon): A gt polygon. 
- invalid_ret (None|float|int): The return value when the invalid polygon - exists. If it is not specified, the function allows the computation - to proceed with invalid polygons by cleaning the their - self-touching or self-crossing parts. - return_poly (bool): Whether to return the polygon of the intersection - area. - - Returns: - intersection_area (float): The intersection area between two polygons. - poly_obj (Polygon, optional): The Polygon object of the intersection - area. Set as `None` if the input is invalid. - """ - assert isinstance(poly_det, plg) - assert isinstance(poly_gt, plg) - assert invalid_ret is None or isinstance(invalid_ret, float) or \ - isinstance(invalid_ret, int) - - if invalid_ret is None: - poly_det = poly_make_valid(poly_det) - poly_gt = poly_make_valid(poly_gt) - - poly_obj = None - area = invalid_ret - if poly_det.is_valid and poly_gt.is_valid: - poly_obj = poly_det.intersection(poly_gt) - area = poly_obj.area - return (area, poly_obj) if return_poly else area - - -def poly_union(poly_det, poly_gt, invalid_ret=None, return_poly=False): - """Calculate the union area between two polygon. - Args: - poly_det (Polygon): A polygon predicted by detector. - poly_gt (Polygon): A gt polygon. - invalid_ret (None|float|int): The return value when the invalid polygon - exists. If it is not specified, the function allows the computation - to proceed with invalid polygons by cleaning the their - self-touching or self-crossing parts. - return_poly (bool): Whether to return the polygon of the intersection - area. - - Returns: - union_area (float): The union area between two polygons. - poly_obj (Polygon|MultiPolygon, optional): The Polygon or MultiPolygon - object of the union of the inputs. The type of object depends on - whether they intersect or not. Set as `None` if the input is - invalid. - """ - assert isinstance(poly_det, plg) - assert isinstance(poly_gt, plg) - assert invalid_ret is None or isinstance(invalid_ret, float) or \ - isinstance(invalid_ret, int) - - if invalid_ret is None: - poly_det = poly_make_valid(poly_det) - poly_gt = poly_make_valid(poly_gt) - - poly_obj = None - area = invalid_ret - if poly_det.is_valid and poly_gt.is_valid: - poly_obj = poly_det.union(poly_gt) - area = poly_obj.area - return (area, poly_obj) if return_poly else area - - -def boundary_iou(src, target, zero_division=0): - """Calculate the IOU between two boundaries. - - Args: - src (list): Source boundary. - target (list): Target boundary. - zero_division (int|float): The return value when invalid - boundary exists. - - Returns: - iou (float): The iou between two boundaries. - """ - assert utils.valid_boundary(src, False) - assert utils.valid_boundary(target, False) - src_poly = points2polygon(src) - target_poly = points2polygon(target) - - return poly_iou(src_poly, target_poly, zero_division=zero_division) - - -def poly_iou(poly_det, poly_gt, zero_division=0): - """Calculate the IOU between two polygons. - - Args: - poly_det (Polygon): A polygon predicted by detector. - poly_gt (Polygon): A gt polygon. - zero_division (int|float): The return value when invalid - polygon exists. - - Returns: - iou (float): The IOU between two polygons. 
- """ - assert isinstance(poly_det, plg) - assert isinstance(poly_gt, plg) - area_inters = poly_intersection(poly_det, poly_gt) - area_union = poly_union(poly_det, poly_gt) - return area_inters / area_union if area_union != 0 else zero_division - - -def one2one_match_ic13(gt_id, det_id, recall_mat, precision_mat, recall_thr, - precision_thr): - """One-to-One match gt and det with icdar2013 standards. - - Args: - gt_id (int): The ground truth id index. - det_id (int): The detection result id index. - recall_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the recall ratio of gt i to det j. - precision_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the precision ratio of gt i to det j. - recall_thr (float): The recall threshold. - precision_thr (float): The precision threshold. - Returns: - True|False: Whether the gt and det are matched. - """ - assert isinstance(gt_id, int) - assert isinstance(det_id, int) - assert isinstance(recall_mat, np.ndarray) - assert isinstance(precision_mat, np.ndarray) - assert 0 <= recall_thr <= 1 - assert 0 <= precision_thr <= 1 - - cont = 0 - for i in range(recall_mat.shape[1]): - if recall_mat[gt_id, - i] > recall_thr and precision_mat[gt_id, - i] > precision_thr: - cont += 1 - if cont != 1: - return False - - cont = 0 - for i in range(recall_mat.shape[0]): - if recall_mat[i, det_id] > recall_thr and precision_mat[ - i, det_id] > precision_thr: - cont += 1 - if cont != 1: - return False - - if recall_mat[gt_id, det_id] > recall_thr and precision_mat[ - gt_id, det_id] > precision_thr: - return True - - return False - - -def one2many_match_ic13(gt_id, recall_mat, precision_mat, recall_thr, - precision_thr, gt_match_flag, det_match_flag, - det_ignored_index): - """One-to-Many match gt and detections with icdar2013 standards. - - Args: - gt_id (int): gt index. - recall_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the recall ratio of gt i to det j. - precision_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the precision ratio of gt i to det j. - recall_thr (float): The recall threshold. - precision_thr (float): The precision threshold. - gt_match_flag (ndarray): An array indicates each gt matched already. - det_match_flag (ndarray): An array indicates each box has been - matched already or not. - det_ignored_index (list): A list indicates each detection box can be - ignored or not. - - Returns: - tuple (True|False, list): The first indicates the gt is matched or not; - the second is the matched detection ids. - """ - assert isinstance(gt_id, int) - assert isinstance(recall_mat, np.ndarray) - assert isinstance(precision_mat, np.ndarray) - assert 0 <= recall_thr <= 1 - assert 0 <= precision_thr <= 1 - - assert isinstance(gt_match_flag, list) - assert isinstance(det_match_flag, list) - assert isinstance(det_ignored_index, list) - - many_sum = 0. - det_ids = [] - for det_id in range(recall_mat.shape[1]): - if gt_match_flag[gt_id] == 0 and det_match_flag[ - det_id] == 0 and det_id not in det_ignored_index: - if precision_mat[gt_id, det_id] >= precision_thr: - many_sum += recall_mat[gt_id, det_id] - det_ids.append(det_id) - if many_sum >= recall_thr: - return True, det_ids - return False, [] - - -def many2one_match_ic13(det_id, recall_mat, precision_mat, recall_thr, - precision_thr, gt_match_flag, det_match_flag, - gt_ignored_index): - """Many-to-One match gt and detections with icdar2013 standards. - - Args: - det_id (int): Detection index. 
- recall_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the recall ratio of gt i to det j. - precision_mat (ndarray): `gt_num x det_num` matrix with element (i,j) - being the precision ratio of gt i to det j. - recall_thr (float): The recall threshold. - precision_thr (float): The precision threshold. - gt_match_flag (ndarray): An array indicates each gt has been matched - already. - det_match_flag (ndarray): An array indicates each detection box has - been matched already or not. - gt_ignored_index (list): A list indicates each gt box can be ignored - or not. - - Returns: - tuple (True|False, list): The first indicates the detection is matched - or not; the second is the matched gt ids. - """ - assert isinstance(det_id, int) - assert isinstance(recall_mat, np.ndarray) - assert isinstance(precision_mat, np.ndarray) - assert 0 <= recall_thr <= 1 - assert 0 <= precision_thr <= 1 - - assert isinstance(gt_match_flag, list) - assert isinstance(det_match_flag, list) - assert isinstance(gt_ignored_index, list) - many_sum = 0. - gt_ids = [] - for gt_id in range(recall_mat.shape[0]): - if gt_match_flag[gt_id] == 0 and det_match_flag[ - det_id] == 0 and gt_id not in gt_ignored_index: - if recall_mat[gt_id, det_id] >= recall_thr: - many_sum += precision_mat[gt_id, det_id] - gt_ids.append(gt_id) - if many_sum >= precision_thr: - return True, gt_ids - return False, [] - - -def points_center(points): - - assert isinstance(points, np.ndarray) - assert points.size % 2 == 0 - - points = points.reshape([-1, 2]) - return np.mean(points, axis=0) - - -def point_distance(p1, p2): - assert isinstance(p1, np.ndarray) - assert isinstance(p2, np.ndarray) - - assert p1.size == 2 - assert p2.size == 2 - - dist = np.square(p2 - p1) - dist = np.sum(dist) - dist = np.sqrt(dist) - return dist - - -def box_center_distance(b1, b2): - assert isinstance(b1, np.ndarray) - assert isinstance(b2, np.ndarray) - return point_distance(points_center(b1), points_center(b2)) - - -def box_diag(box): - assert isinstance(box, np.ndarray) - assert box.size == 8 - - return point_distance(box[0:2], box[4:6]) - - -def filter_2dlist_result(results, scores, score_thr): - """Find out detected results whose score > score_thr. - - Args: - results (list[list[float]]): The result list. - score (list): The score list. - score_thr (float): The score threshold. - Returns: - valid_results (list[list[float]]): The valid results. - valid_score (list[float]): The scores which correspond to the valid - results. - """ - assert isinstance(results, list) - assert len(results) == len(scores) - assert isinstance(score_thr, float) - assert 0 <= score_thr <= 1 - - inds = np.array(scores) > score_thr - valid_results = [results[idx] for idx in np.where(inds)[0].tolist()] - valid_scores = [scores[idx] for idx in np.where(inds)[0].tolist()] - return valid_results, valid_scores - - -def filter_result(results, scores, score_thr): - """Find out detected results whose score > score_thr. - - Args: - results (ndarray): The results matrix of shape (n, k). - score (ndarray): The score vector of shape (n,). - score_thr (float): The score threshold. - Returns: - valid_results (ndarray): The valid results of shape (m,k) with m<=n. - valid_score (ndarray): The scores which correspond to the - valid results. 
- """ - assert results.ndim == 2 - assert scores.shape[0] == results.shape[0] - assert isinstance(score_thr, float) - assert 0 <= score_thr <= 1 - - inds = scores > score_thr - valid_results = results[inds, :] - valid_scores = scores[inds] - return valid_results, valid_scores - - -def select_top_boundary(boundaries_list, scores_list, score_thr): - """Select poly boundaries with scores >= score_thr. - - Args: - boundaries_list (list[list[list[float]]]): List of boundaries. - The 1st, 2nd, and 3rd indices are for image, text and - vertice, respectively. - scores_list (list(list[float])): List of lists of scores. - score_thr (float): The score threshold to filter out bboxes. - - Returns: - selected_bboxes (list[list[list[float]]]): List of boundaries. - The 1st, 2nd, and 3rd indices are for image, text and vertice, - respectively. - """ - assert isinstance(boundaries_list, list) - assert isinstance(scores_list, list) - assert isinstance(score_thr, float) - assert len(boundaries_list) == len(scores_list) - assert 0 <= score_thr <= 1 - - selected_boundaries = [] - for boundary, scores in zip(boundaries_list, scores_list): - if len(scores) > 0: - assert len(scores) == len(boundary) - inds = [ - iter for iter in range(len(scores)) - if scores[iter] >= score_thr - ] - selected_boundaries.append([boundary[i] for i in inds]) - else: - selected_boundaries.append(boundary) - return selected_boundaries - - -def select_bboxes_via_score(bboxes_list, scores_list, score_thr): - """Select bboxes with scores >= score_thr. - - Args: - bboxes_list (list[ndarray]): List of bboxes. Each element is ndarray of - shape (n,8) - scores_list (list(list[float])): List of lists of scores. - score_thr (float): The score threshold to filter out bboxes. - - Returns: - selected_bboxes (list[ndarray]): List of bboxes. Each element is - ndarray of shape (m,8) with m<=n. - """ - assert isinstance(bboxes_list, list) - assert isinstance(scores_list, list) - assert isinstance(score_thr, float) - assert len(bboxes_list) == len(scores_list) - assert 0 <= score_thr <= 1 - - selected_bboxes = [] - for bboxes, scores in zip(bboxes_list, scores_list): - if len(scores) > 0: - assert len(scores) == bboxes.shape[0] - inds = [ - iter for iter in range(len(scores)) - if scores[iter] >= score_thr - ] - selected_bboxes.append(bboxes[inds, :]) - else: - selected_bboxes.append(bboxes) - return selected_bboxes diff --git a/spaces/tomofi/MMOCR/mmocr/models/ner/utils/__init__.py b/spaces/tomofi/MMOCR/mmocr/models/ner/utils/__init__.py deleted file mode 100644 index 076239cd389027258c1b755405c816e40cccae1c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/ner/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .activations import GeluNew -from .bert import BertModel - -__all__ = ['BertModel', 'GeluNew'] diff --git a/spaces/tomofi/NDLOCR/cli/procs/base_proc.py b/spaces/tomofi/NDLOCR/cli/procs/base_proc.py deleted file mode 100644 index d8b0a69d413f66592736f967340e7592647f85c7..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/cli/procs/base_proc.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) 2022, National Diet Library, Japan -# -# This software is released under the CC BY 4.0. 
-# https://creativecommons.org/licenses/by/4.0/ - - -import copy -import cv2 -import os - - -class BaseInferenceProcess: - """ - 各推論処理を実行するプロセスクラスを作るためのメタクラス。 - - Attributes - ---------- - proc_name : str - 推論処理を実行するインスタンスが持つプロセス名。 - [実行される順序を表す数字+クラスごとの処理名]で構成されます。 - cfg : dict - 本推論実行における設定情報です。 - """ - def __init__(self, cfg, pid, proc_type='_base_prep'): - """ - Parameters - ---------- - cfg : dict - 本実行処理における設定情報です。 - pid : int - 実行される順序を表す数値。 - proc_type : str - クラスごとに定義されている処理名。 - """ - self.proc_name = str(pid) + proc_type - - if not self._is_valid_cfg(cfg): - raise ValueError('Configuration validation error.') - else: - self.cfg = cfg - - self.process_dump_dir = None - - return True - - def do(self, data_idx, input_data): - """ - 推論処理を実行する際にOcrInferencerクラスから呼び出される推論実行関数。 - 入力データのバリデーションや推論処理、推論結果の保存などが含まれます。 - 本処理は基本的に継承先では変更されないことを想定しています。 - - Parameters - ---------- - data_idx : int - 入力データのインデックス。 - 画像ファイル1つごとに入力データのリストが構成されます。 - input_data : dict - 推論処理を実行すつ対象の入力データ。 - - Returns - ------- - result : dict - 推論処理の結果を保持する辞書型データ。 - 基本的にinput_dataと同じ構造です。 - """ - # input data valudation check - if not self._is_valid_input(input_data): - raise ValueError('Input data validation error.') - - # run main inference process - result = self._run_process(input_data) - if result is None: - raise ValueError('Inference output error in {0}.'.format(self.proc_name)) - - # dump inference result - if self.cfg['dump']: - self._dump_result(input_data, result, data_idx) - - return result - - def _run_process(self, input_data): - """ - 推論処理の本体部分。 - 処理内容は継承先のクラスで実装されることを想定しています。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - result : dict - 推論処理の結果を保持する辞書型データ。 - 基本的にinput_dataと同じ構造です。 - """ - print('### Base Inference Process ###') - result = copy.deepcopy(input_data) - return result - - def _is_valid_cfg(self, cfg): - """ - 推論処理全体の設定情報ではなく、クラス単位の設定情報に対するバリデーション。 - バリデーションの内容は継承先のクラスで実装されることを想定しています。 - - Parameters - ---------- - cfg : dict - 本推論実行における設定情報です。 - - Returns - ------- - [変数なし] : bool - 設定情報が正しければTrue, そうでなければFalseを返します。 - """ - if cfg is None: - print('Given configuration data is None.') - return False - return True - - def _is_valid_input(self, input_data): - """ - 本クラスの推論処理における入力データのバリデーション。 - バリデーションの内容は継承先のクラスで実装されることを想定しています。 - - Parameters - ---------- - input_data : dict - 推論処理を実行する対象の入力データ。 - - Returns - ------- - [変数なし] : bool -  入力データが正しければTrue, そうでなければFalseを返します。 - """ - return True - - def _dump_result(self, input_data, result, data_idx): - """ - 本クラスの推論処理結果をファイルに保存します。 - dumpフラグが有効の場合にのみ実行されます。 - - Parameters - ---------- - input_data : dict - 推論処理に利用した入力データ。 - result : list - 推論処理の結果を保持するリスト型データ。 - 各要素は基本的にinput_dataと同じ構造の辞書型データです。 - data_idx : int - 入力データのインデックス。 - 画像ファイル1つごとに入力データのリストが構成されます。 - """ - - self.process_dump_dir = os.path.join(os.path.join(input_data['output_dir'], 'dump'), self.proc_name) - - for i, single_result in enumerate(result): - if 'img' in single_result.keys() and single_result['img'] is not None: - dump_img_name = os.path.basename(input_data['img_path']).split('.')[0] + '_' + str(data_idx) + '_' + str(i) + '.jpg' - self._dump_img_result(single_result, input_data['output_dir'], dump_img_name) - if 'xml' in single_result.keys() and single_result['xml'] is not None: - dump_xml_name = os.path.basename(input_data['img_path']).split('.')[0] + '_' + str(data_idx) + '_' + str(i) + '.xml' - self._dump_xml_result(single_result, input_data['output_dir'], dump_xml_name) - if 'txt' in single_result.keys() and 
single_result['txt'] is not None: - dump_txt_name = os.path.basename(input_data['img_path']).split('.')[0] + '_' + str(data_idx) + '_' + str(i) + '.txt' - self._dump_txt_result(single_result, input_data['output_dir'], dump_txt_name) - return - - def _dump_img_result(self, single_result, output_dir, img_name): - """ - 本クラスの推論処理結果(画像)をファイルに保存します。 - dumpフラグが有効の場合にのみ実行されます。 - - Parameters - ---------- - single_result : dict - 推論処理の結果を保持する辞書型データ。 - output_dir : str - 推論結果が保存されるディレクトリのパス。 - img_name : str - 入力データの画像ファイル名。 - dumpされる画像ファイルのファイル名は入力のファイル名と同名(複数ある場合は連番を付与)となります。 - """ - pred_img_dir = os.path.join(self.process_dump_dir, 'pred_img') - os.makedirs(pred_img_dir, exist_ok=True) - image_file_path = os.path.join(pred_img_dir, img_name) - dump_image = self._create_result_image(single_result) - try: - cv2.imwrite(image_file_path, dump_image) - except OSError as err: - print("Dump image save error: {0}".format(err)) - raise OSError - - return - - def _dump_xml_result(self, single_result, output_dir, img_name): - """ - 本クラスの推論処理結果(XML)をファイルに保存します。 - dumpフラグが有効の場合にのみ実行されます。 - - Parameters - ---------- - single_result : dict - 推論処理の結果を保持する辞書型データ。 - output_dir : str - 推論結果が保存されるディレクトリのパス。 - img_name : str - 入力データの画像ファイル名。 - dumpされるXMLファイルのファイル名は入力のファイル名とほぼ同名(拡張子の変更、サフィックスや連番の追加のみ)となります。 - """ - xml_dir = os.path.join(self.process_dump_dir, 'xml') - os.makedirs(xml_dir, exist_ok=True) - trum, _ = os.path.splitext(img_name) - xml_path = os.path.join(xml_dir, trum + '.xml') - try: - single_result['xml'].write(xml_path, encoding='utf-8', xml_declaration=True) - except OSError as err: - print("Dump xml save error: {0}".format(err)) - raise OSError - - return - - def _dump_txt_result(self, single_result, output_dir, img_name): - """ - 本クラスの推論処理結果(テキスト)をファイルに保存します。 - dumpフラグが有効の場合にのみ実行されます。 - - Parameters - ---------- - single_result : dict - 推論処理の結果を保持する辞書型データ。 - output_dir : str - 推論結果が保存されるディレクトリのパス。 - img_name : str - 入力データの画像ファイル名。 - dumpされるテキストファイルのファイル名は入力のファイル名とほぼ同名(拡張子の変更、サフィックスや連番の追加のみ)となります。 - """ - txt_dir = os.path.join(self.process_dump_dir, 'txt') - os.makedirs(txt_dir, exist_ok=True) - - trum, _ = os.path.splitext(img_name) - txt_path = os.path.join(txt_dir, trum + '_main.txt') - try: - with open(txt_path, 'w') as f: - f.write(single_result['txt']) - except OSError as err: - print("Dump text save error: {0}".format(err)) - raise OSError - - return - - def _create_result_image(self, single_result): - """ - 推論結果を入力の画像に重畳した画像データを生成します。 - - Parameters - ---------- - single_result : dict - 推論処理の結果を保持する辞書型データ。 - """ - dump_img = None - if 'dump_img' in single_result.keys(): - dump_img = copy.deepcopy(single_result['dump_img']) - else: - dump_img = copy.deepcopy(single_result['img']) - if 'xml' in single_result.keys() and single_result['xml'] is not None: - # draw single inferenceresult on input image - # this should be implemeted in each child class - cv2.putText(dump_img, 'dump' + self.proc_name, (0, 50), - cv2.FONT_HERSHEY_PLAIN, 4, (255, 0, 0), 5, cv2.LINE_AA) - pass - else: - cv2.putText(dump_img, 'dump' + self.proc_name, (0, 50), - cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 0), 5, cv2.LINE_AA) - return dump_img diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/match_costs/builder.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/match_costs/builder.py deleted file mode 100644 index 6894017d42eb16ee4a8ae3ed660a71cda3ad9940..0000000000000000000000000000000000000000 --- 
a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/match_costs/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -MATCH_COST = Registry('Match Cost') - - -def build_match_cost(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, MATCH_COST, default_args) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/detr_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/detr_head.py deleted file mode 100644 index 8f86e97e7e4dc5dd455ee89bce9eed3b94bd82e8..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/detr_head.py +++ /dev/null @@ -1,682 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, Linear, build_activation_layer -from mmcv.cnn.bricks.transformer import FFN, build_positional_encoding -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from mmdet.models.utils import build_transformer -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class DETRHead(AnchorFreeHead): - """Implements the DETR transformer head. - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_classes (int): Number of categories excluding the background. - in_channels (int): Number of channels in the input feature map. - num_query (int): Number of query in Transformer. - num_reg_fcs (int, optional): Number of fully-connected layers used in - `FFN`, which is then used for the regression head. Default 2. - transformer (obj:`mmcv.ConfigDict`|dict): Config for transformer. - Default: None. - sync_cls_avg_factor (bool): Whether to sync the avg_factor of - all ranks. Default to False. - positional_encoding (obj:`mmcv.ConfigDict`|dict): - Config for position encoding. - loss_cls (obj:`mmcv.ConfigDict`|dict): Config of the - classification loss. Default `CrossEntropyLoss`. - loss_bbox (obj:`mmcv.ConfigDict`|dict): Config of the - regression loss. Default `L1Loss`. - loss_iou (obj:`mmcv.ConfigDict`|dict): Config of the - regression iou loss. Default `GIoULoss`. - tran_cfg (obj:`mmcv.ConfigDict`|dict): Training config of - transformer head. - test_cfg (obj:`mmcv.ConfigDict`|dict): Testing config of - transformer head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - _version = 2 - - def __init__(self, - num_classes, - in_channels, - num_query=100, - num_reg_fcs=2, - transformer=None, - sync_cls_avg_factor=False, - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0), - iou_cost=dict( - type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100), - init_cfg=None, - **kwargs): - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since it brings inconvenience when the initialization of - # `AnchorFreeHead` is called. 
- super(AnchorFreeHead, self).__init__(init_cfg) - self.bg_cls_weight = 0 - self.sync_cls_avg_factor = sync_cls_avg_factor - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None and (self.__class__ is DETRHead): - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR rep0, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided '\ - 'when train_cfg is set.' - assigner = train_cfg['assigner'] - assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \ - 'The classification weight for loss and matcher should be' \ - 'exactly the same.' - assert loss_bbox['loss_weight'] == assigner['reg_cost'][ - 'weight'], 'The regression L1 weight for loss and matcher ' \ - 'should be exactly the same.' - assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \ - 'The regression iou weight for loss and matcher should be' \ - 'exactly the same.' - self.assigner = build_assigner(assigner) - # DETR sampling=False, so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.num_query = num_query - self.num_classes = num_classes - self.in_channels = in_channels - self.num_reg_fcs = num_reg_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.fp16_enabled = False - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_iou = build_loss(loss_iou) - - if self.loss_cls.use_sigmoid: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - self.act_cfg = transformer.get('act_cfg', - dict(type='ReLU', inplace=True)) - self.activate = build_activation_layer(self.act_cfg) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.transformer = build_transformer(transformer) - self.embed_dims = self.transformer.embed_dims - assert 'num_feats' in positional_encoding - num_feats = positional_encoding['num_feats'] - assert num_feats * 2 == self.embed_dims, 'embed_dims should' \ - f' be exactly 2 times of num_feats. Found {self.embed_dims}' \ - f' and {num_feats}.' 
- self._init_layers() - - def _init_layers(self): - """Initialize layers of the transformer head.""" - self.input_proj = Conv2d( - self.in_channels, self.embed_dims, kernel_size=1) - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_reg_fcs, - self.act_cfg, - dropout=0.0, - add_residual=False) - self.fc_reg = Linear(self.embed_dims, 4) - self.query_embedding = nn.Embedding(self.num_query, self.embed_dims) - - def init_weights(self): - """Initialize weights of the transformer head.""" - # The initialization for transformer is important - self.transformer.init_weights() - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """load checkpoints.""" - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since `AnchorFreeHead._load_from_state_dict` should not be - # called here. Invoking the default `Module._load_from_state_dict` - # is enough. - - # Names of some parameters in has been changed. - version = local_metadata.get('version', None) - if (version is None or version < 2) and self.__class__ is DETRHead: - convert_dict = { - '.self_attn.': '.attentions.0.', - '.ffn.': '.ffns.0.', - '.multihead_attn.': '.attentions.1.', - '.decoder.norm.': '.decoder.post_norm.' - } - for k in state_dict.keys(): - for ori_key, convert_key in convert_dict.items(): - if ori_key in k: - convert_key = k.replace(ori_key, convert_key) - state_dict[convert_key] = state_dict[k] - del state_dict[k] - - super(AnchorFreeHead, - self)._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, - unexpected_keys, error_msgs) - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. - - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single, feats, img_metas_list) - - def forward_single(self, x, img_metas): - """"Forward function for a single feature level. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # construct binary masks which used for the transformer. - # NOTE following the official DETR repo, non-zero values representing - # ignored positions, while zero values means valid positions. 
- batch_size = x.size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - masks = x.new_ones((batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - masks[img_id, :img_h, :img_w] = 0 - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - # position encoding - pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w] - # outs_dec: [nb_dec, bs, num_query, embed_dim] - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores_list, - all_bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # NOTE defaultly only the outputs from the last feature scale is used. - all_cls_scores = all_cls_scores_list[-1] - all_bbox_preds = all_bbox_preds_list[-1] - assert gt_bboxes_ignore is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' 
- - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, - cls_scores, - bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images. Shape [bs, num_query, cls_out_channels]. - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape [bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components for outputs from - a single decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, - img_metas, gt_bboxes_ignore_list) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - if self.sync_cls_avg_factor: - cls_avg_factor = reduce_mean( - cls_scores.new_tensor([cls_avg_factor])) - cls_avg_factor = max(cls_avg_factor, 1) - - cls_avg_factor = max(cls_avg_factor, 1) - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes accross all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(img_metas, bbox_preds): - img_h, img_w, _ = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image with shape [num_query, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all \ - images. 
- - bbox_targets_list (list[Tensor]): BBox targets for all \ - images. - - bbox_weights_list (list[Tensor]): BBox weights for all \ - images. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - assert gt_bboxes_ignore_list is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' - num_imgs = len(cls_scores_list) - gt_bboxes_ignore_list = [ - gt_bboxes_ignore_list for _ in range(num_imgs) - ] - - (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None): - """"Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_query, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth bboxes for one image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth class indices for one image - with shape (num_gts, ). - img_meta (dict): Meta information for one image. - gt_bboxes_ignore (Tensor, optional): Bounding boxes - which can be ignored. Default None. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. - """ - - num_bboxes = bbox_pred.size(0) - # assigner and sampler - assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes, - gt_labels, img_meta, - gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, bbox_pred, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - img_h, img_w, _ = img_meta['img_shape'] - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. 
- factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - # over-write because img_metas are needed as inputs for bbox_head. - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """Forward function for training mode. - - Args: - x (list[Tensor]): Features from backbone. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert proposal_cfg is None, '"proposal_cfg" must be None' - outs = self(x, img_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores_list, - all_bbox_preds_list, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - # NOTE defaultly only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - - return result_list - - def _get_bboxes_single(self, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False): - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_query, cls_out_channels]. 
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_query, 4]. - img_shape (tuple[int]): Shape of input image, (height, width, 3). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - rescale (bool, optional): If True, return boxes in original image - space. Default False. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. - - - det_bboxes: Predicted bboxes with shape [num_query, 5], \ - where the first 4 columns are bounding box positions \ - (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \ - between 0 and 1. - - det_labels: Predicted labels of the corresponding box with \ - shape [num_query]. - """ - assert len(cls_score) == len(bbox_pred) - max_per_img = self.test_cfg.get('max_per_img', self.num_query) - # exclude background - if self.loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - scores, indexs = cls_score.view(-1).topk(max_per_img) - det_labels = indexs % self.num_classes - bbox_index = indexs // self.num_classes - bbox_pred = bbox_pred[bbox_index] - else: - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img) - bbox_pred = bbox_pred[bbox_index] - det_labels = det_labels[bbox_index] - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1) - - return det_bboxes, det_labels diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/ssd_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/ssd_head.py deleted file mode 100644 index 1e2151afd32849c1e7cd13ab38351ef6a6d670b6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/ssd_head.py +++ /dev/null @@ -1,264 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.runner import ModuleList, force_fp32 - -from mmdet.core import (build_anchor_generator, build_assigner, - build_bbox_coder, build_sampler, multi_apply) -from ..builder import HEADS -from ..losses import smooth_l1_loss -from .anchor_head import AnchorHead - - -# TODO: add loss evaluator for SSD -@HEADS.register_module() -class SSDHead(AnchorHead): - """SSD head used in https://arxiv.org/abs/1512.02325. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- """ # noqa: W605 - - def __init__(self, - num_classes=80, - in_channels=(512, 1024, 512, 256, 256, 256), - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=300, - strides=[8, 16, 32, 64, 100, 300], - ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]), - basesize_ratio_range=(0.1, 0.9)), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - reg_decoded_bbox=False, - train_cfg=None, - test_cfg=None, - init_cfg=dict( - type='Xavier', - layer='Conv2d', - distribution='uniform', - bias=0)): - super(AnchorHead, self).__init__(init_cfg) - self.num_classes = num_classes - self.in_channels = in_channels - self.cls_out_channels = num_classes + 1 # add background class - self.anchor_generator = build_anchor_generator(anchor_generator) - num_anchors = self.anchor_generator.num_base_anchors - - reg_convs = [] - cls_convs = [] - for i in range(len(in_channels)): - reg_convs.append( - nn.Conv2d( - in_channels[i], - num_anchors[i] * 4, - kernel_size=3, - padding=1)) - cls_convs.append( - nn.Conv2d( - in_channels[i], - num_anchors[i] * (num_classes + 1), - kernel_size=3, - padding=1)) - self.reg_convs = ModuleList(reg_convs) - self.cls_convs = ModuleList(cls_convs) - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.reg_decoded_bbox = reg_decoded_bbox - self.use_sigmoid_cls = False - self.cls_focal_loss = False - self.train_cfg = train_cfg - self.test_cfg = test_cfg - # set sampling=False for archor_target - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - cls_scores = [] - bbox_preds = [] - for feat, reg_conv, cls_conv in zip(feats, self.reg_convs, - self.cls_convs): - cls_scores.append(cls_conv(feat)) - bbox_preds.append(reg_conv(feat)) - return cls_scores, bbox_preds - - def loss_single(self, cls_score, bbox_pred, anchor, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single image. - - Args: - cls_score (Tensor): Box scores for eachimage - Has shape (num_total_anchors, num_classes). - bbox_pred (Tensor): Box energies / deltas for each image - level with shape (num_total_anchors, 4). - anchors (Tensor): Box reference for each scale level with shape - (num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (num_total_anchors,). - label_weights (Tensor): Label weights of each anchor with shape - (num_total_anchors,) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - loss_cls_all = F.cross_entropy( - cls_score, labels, reduction='none') * label_weights - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((labels >= 0) & - (labels < self.num_classes)).nonzero().reshape(-1) - neg_inds = (labels == self.num_classes).nonzero().view(-1) - - num_pos_samples = pos_inds.size(0) - num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples - if num_neg_samples > neg_inds.size(0): - num_neg_samples = neg_inds.size(0) - topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples) - loss_cls_pos = loss_cls_all[pos_inds].sum() - loss_cls_neg = topk_loss_cls_neg.sum() - loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples - - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - bbox_pred = self.bbox_coder.decode(anchor, bbox_pred) - - loss_bbox = smooth_l1_loss( - bbox_pred, - bbox_targets, - bbox_weights, - beta=self.train_cfg.smoothl1_beta, - avg_factor=num_total_samples) - return loss_cls[None], loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=1, - unmap_outputs=False) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - - num_images = len(img_metas) - all_cls_scores = torch.cat([ - s.permute(0, 2, 3, 1).reshape( - num_images, -1, self.cls_out_channels) for s in cls_scores - ], 1) - all_labels = torch.cat(labels_list, -1).view(num_images, -1) - all_label_weights = torch.cat(label_weights_list, - -1).view(num_images, -1) - all_bbox_preds = torch.cat([ - b.permute(0, 2, 3, 1).reshape(num_images, -1, 4) - for b in bbox_preds - ], -2) - all_bbox_targets = torch.cat(bbox_targets_list, - -2).view(num_images, -1, 4) - all_bbox_weights = torch.cat(bbox_weights_list, - -2).view(num_images, -1, 4) - - # concat all level anchors to a single tensor - all_anchors = [] - for i in range(num_images): - all_anchors.append(torch.cat(anchor_list[i])) - - # check NaN and Inf - assert torch.isfinite(all_cls_scores).all().item(), \ - 'classification scores become infinite or NaN!' - assert torch.isfinite(all_bbox_preds).all().item(), \ - 'bbox predications become infinite or NaN!' - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - all_cls_scores, - all_bbox_preds, - all_anchors, - all_labels, - all_label_weights, - all_bbox_targets, - all_bbox_weights, - num_total_samples=num_total_pos) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) diff --git a/spaces/ucanbaklava/stablediffusionapi-disney-pixar-cartoon/README.md b/spaces/ucanbaklava/stablediffusionapi-disney-pixar-cartoon/README.md deleted file mode 100644 index e61356b3fe5efb110589cce0168916a9c600dff9..0000000000000000000000000000000000000000 --- a/spaces/ucanbaklava/stablediffusionapi-disney-pixar-cartoon/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stablediffusionapi Disney Pixar Cartoon -emoji: 🚀 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ucinlp/autoprompt/autoprompt/run_linear_probe.py b/spaces/ucinlp/autoprompt/autoprompt/run_linear_probe.py deleted file mode 100644 index 8cc3bedaa6710bb373ca8a1840729090296a40a2..0000000000000000000000000000000000000000 --- a/spaces/ucinlp/autoprompt/autoprompt/run_linear_probe.py +++ /dev/null @@ -1,151 +0,0 @@ -""" -Script for running a linear probe on glue tasks. 
- -Largely copied from: - https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py -""" -import argparse -import logging -from pathlib import Path - -import torch -import torch.nn.functional as F -from torch.utils.data import DataLoader -from transformers import AutoConfig, AutoTokenizer, WEIGHTS_NAME, CONFIG_NAME -from tqdm import tqdm - -from autoprompt.popsicle import AutoPopsicle -import autoprompt.utils as utils - -logger = logging.getLogger(__name__) - - -def main(args): - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - config = AutoConfig.from_pretrained(args.model_name, num_labels=args.num_labels) - tokenizer = AutoTokenizer.from_pretrained(args.model_name) - model = AutoPopsicle.from_pretrained(args.model_name, config=config) - model.to(device) - - collator = utils.Collator(pad_token_id=tokenizer.pad_token_id) - train_dataset, label_map = utils.load_classification_dataset( - args.train, - tokenizer, - args.field_a, - args.field_b, - args.label_field - ) - train_loader = DataLoader(train_dataset, batch_size=args.bsz, shuffle=True, collate_fn=collator) - dev_dataset, _ = utils.load_classification_dataset( - args.dev, - tokenizer, - args.field_a, - args.field_b, - args.label_field, - label_map - ) - dev_loader = DataLoader(dev_dataset, batch_size=args.bsz, shuffle=True, collate_fn=collator) - test_dataset, _ = utils.load_classification_dataset( - args.test, - tokenizer, - args.field_a, - args.field_b, - args.label_field, - label_map - ) - test_loader = DataLoader(test_dataset, batch_size=args.bsz, shuffle=True, collate_fn=collator) - optimizer = torch.optim.Adam(model.classifier.parameters(), lr=args.lr, weight_decay=1e-6) - - if not args.ckpt_dir.exists(): - # logger.info(f'Making checkpoint directory: {args.ckpt_dir}') - args.ckpt_dir.mkdir(parents=True) - elif not args.force_overwrite: - raise RuntimeError('Checkpoint directory already exists.') - - best_accuracy = 0 - try: - for epoch in range(args.epochs): - logger.info('Training...') - model.eval() # Just linear regression - don't want model outputs changing during training. - avg_loss = utils.ExponentialMovingAverage() - pbar = tqdm(train_loader) - for model_inputs, labels in pbar: - model_inputs = {k: v.to(device) for k, v in model_inputs.items()} - labels = labels.to(device) - optimizer.zero_grad() - logits, *_ = model(**model_inputs) - loss = F.cross_entropy(logits, labels.squeeze(-1)) - loss.backward() - optimizer.step() - avg_loss.update(loss.item()) - pbar.set_description(f'loss: {avg_loss.get_metric(): 0.4f}') - - logger.info('Evaluating...') - model.eval() - correct = 0 - total = 0 - for model_inputs, labels in dev_loader: - model_inputs = {k: v.to(device) for k, v in model_inputs.items()} - labels = labels.to(device) - logits, *_ = model(**model_inputs) - _, preds = logits.max(dim=-1) - correct += (preds == labels.squeeze(-1)).sum().item() - total += labels.size(0) - accuracy = correct / (total + 1e-13) - logger.info(f'Accuracy: {accuracy : 0.4f}') - - if accuracy > best_accuracy: - logger.info('Best performance so far. 
Saving...') - # torch.save(model.state_dict(), args.ckpt_dir / WEIGHTS_NAME) - # model.config.to_json_file(args.ckpt_dir / CONFIG_NAME) - model.save_pretrained(args.ckpt_dir) - tokenizer.save_pretrained(args.ckpt_dir) - best_accuracy = accuracy - - except KeyboardInterrupt: - logger.info('Training manually terminated.') - - logger.info('Testing...') - checkpoint = torch.load(args.ckpt_dir / WEIGHTS_NAME) - model.load_state_dict(checkpoint) - model.eval() - correct = 0 - total = 0 - for model_inputs, labels in test_loader: - model_inputs = {k: v.to(device) for k, v in model_inputs.items()} - labels = labels.to(device) - logits, *_ = model(**model_inputs) - _, preds = logits.max(dim=-1) - correct += (preds == labels.squeeze(-1)).sum().item() - total += labels.size(0) - accuracy = correct / (total + 1e-13) - logger.info(f'Accuracy: {accuracy : 0.4f}') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--model-name', type=str) - parser.add_argument('--train', type=Path) - parser.add_argument('--dev', type=Path) - parser.add_argument('--test', type=Path) - parser.add_argument('--field-a', type=str) - parser.add_argument('--field-b', type=str, default=None) - parser.add_argument('--label-field', type=str, default='label') - parser.add_argument('--ckpt-dir', type=Path, default=Path('ckpt/')) - parser.add_argument('--num-labels', type=int, default=2) - parser.add_argument('--bsz', type=int, default=32) - parser.add_argument('--epochs', type=int, default=10) - parser.add_argument('--lr', type=float, default=1e-3) - parser.add_argument('-f', '--force-overwrite', action='store_true', default=True) - parser.add_argument('--debug', action='store_true') - parser.add_argument('--log_file', type=str, default='log.txt') - args = parser.parse_args() - - if args.debug: - level = logging.DEBUG - else: - level = logging.INFO - logging.basicConfig(level=level, filename=args.log_file) - - main(args) diff --git a/spaces/ulysses115/diffsvc_test/modules/fastspeech/pe.py b/spaces/ulysses115/diffsvc_test/modules/fastspeech/pe.py deleted file mode 100644 index da0d46e3446bbf45d8ee3682edcaf0d8d64dcdfb..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/modules/fastspeech/pe.py +++ /dev/null @@ -1,149 +0,0 @@ -from modules.commons.common_layers import * -from utils.hparams import hparams -from modules.fastspeech.tts_modules import PitchPredictor -from utils.pitch_utils import denorm_f0 - - -class Prenet(nn.Module): - def __init__(self, in_dim=80, out_dim=256, kernel=5, n_layers=3, strides=None): - super(Prenet, self).__init__() - padding = kernel // 2 - self.layers = [] - self.strides = strides if strides is not None else [1] * n_layers - for l in range(n_layers): - self.layers.append(nn.Sequential( - nn.Conv1d(in_dim, out_dim, kernel_size=kernel, padding=padding, stride=self.strides[l]), - nn.ReLU(), - nn.BatchNorm1d(out_dim) - )) - in_dim = out_dim - self.layers = nn.ModuleList(self.layers) - self.out_proj = nn.Linear(out_dim, out_dim) - - def forward(self, x): - """ - - :param x: [B, T, 80] - :return: [L, B, T, H], [B, T, H] - """ - # padding_mask = x.abs().sum(-1).eq(0).data # [B, T] - padding_mask = x.abs().sum(-1).eq(0).detach() - nonpadding_mask_TB = 1 - padding_mask.float()[:, None, :] # [B, 1, T] - x = x.transpose(1, 2) - hiddens = [] - for i, l in enumerate(self.layers): - nonpadding_mask_TB = nonpadding_mask_TB[:, :, ::self.strides[i]] - x = l(x) * nonpadding_mask_TB - hiddens.append(x) - hiddens = torch.stack(hiddens, 0) # [L, B, H, T] - 
hiddens = hiddens.transpose(2, 3) # [L, B, T, H] - x = self.out_proj(x.transpose(1, 2)) # [B, T, H] - x = x * nonpadding_mask_TB.transpose(1, 2) - return hiddens, x - - -class ConvBlock(nn.Module): - def __init__(self, idim=80, n_chans=256, kernel_size=3, stride=1, norm='gn', dropout=0): - super().__init__() - self.conv = ConvNorm(idim, n_chans, kernel_size, stride=stride) - self.norm = norm - if self.norm == 'bn': - self.norm = nn.BatchNorm1d(n_chans) - elif self.norm == 'in': - self.norm = nn.InstanceNorm1d(n_chans, affine=True) - elif self.norm == 'gn': - self.norm = nn.GroupNorm(n_chans // 16, n_chans) - elif self.norm == 'ln': - self.norm = LayerNorm(n_chans // 16, n_chans) - elif self.norm == 'wn': - self.conv = torch.nn.utils.weight_norm(self.conv.conv) - self.dropout = nn.Dropout(dropout) - self.relu = nn.ReLU() - - def forward(self, x): - """ - - :param x: [B, C, T] - :return: [B, C, T] - """ - x = self.conv(x) - if not isinstance(self.norm, str): - if self.norm == 'none': - pass - elif self.norm == 'ln': - x = self.norm(x.transpose(1, 2)).transpose(1, 2) - else: - x = self.norm(x) - x = self.relu(x) - x = self.dropout(x) - return x - - -class ConvStacks(nn.Module): - def __init__(self, idim=80, n_layers=5, n_chans=256, odim=32, kernel_size=5, norm='gn', - dropout=0, strides=None, res=True): - super().__init__() - self.conv = torch.nn.ModuleList() - self.kernel_size = kernel_size - self.res = res - self.in_proj = Linear(idim, n_chans) - if strides is None: - strides = [1] * n_layers - else: - assert len(strides) == n_layers - for idx in range(n_layers): - self.conv.append(ConvBlock( - n_chans, n_chans, kernel_size, stride=strides[idx], norm=norm, dropout=dropout)) - self.out_proj = Linear(n_chans, odim) - - def forward(self, x, return_hiddens=False): - """ - - :param x: [B, T, H] - :return: [B, T, H] - """ - x = self.in_proj(x) - x = x.transpose(1, -1) # (B, idim, Tmax) - hiddens = [] - for f in self.conv: - x_ = f(x) - x = x + x_ if self.res else x_ # (B, C, Tmax) - hiddens.append(x) - x = x.transpose(1, -1) - x = self.out_proj(x) # (B, Tmax, H) - if return_hiddens: - hiddens = torch.stack(hiddens, 1) # [B, L, C, T] - return x, hiddens - return x - - -class PitchExtractor(nn.Module): - def __init__(self, n_mel_bins=80, conv_layers=2): - super().__init__() - self.hidden_size = hparams['hidden_size'] - self.predictor_hidden = hparams['predictor_hidden'] if hparams['predictor_hidden'] > 0 else self.hidden_size - self.conv_layers = conv_layers - - self.mel_prenet = Prenet(n_mel_bins, self.hidden_size, strides=[1, 1, 1]) - if self.conv_layers > 0: - self.mel_encoder = ConvStacks( - idim=self.hidden_size, n_chans=self.hidden_size, odim=self.hidden_size, n_layers=self.conv_layers) - self.pitch_predictor = PitchPredictor( - self.hidden_size, n_chans=self.predictor_hidden, - n_layers=5, dropout_rate=0.1, odim=2, - padding=hparams['ffn_padding'], kernel_size=hparams['predictor_kernel']) - - def forward(self, mel_input=None): - ret = {} - mel_hidden = self.mel_prenet(mel_input)[1] - if self.conv_layers > 0: - mel_hidden = self.mel_encoder(mel_hidden) - - ret['pitch_pred'] = pitch_pred = self.pitch_predictor(mel_hidden) - - pitch_padding = mel_input.abs().sum(-1) == 0 - use_uv = hparams['pitch_type'] == 'frame' #and hparams['use_uv'] - ret['f0_denorm_pred'] = denorm_f0( - pitch_pred[:, :, 0], (pitch_pred[:, :, 1] > 0) if use_uv else None, - hparams, pitch_padding=pitch_padding) - return ret \ No newline at end of file diff --git 
a/spaces/umichVision/virtex-redcaps/virtex/model_zoo/__init__.py b/spaces/umichVision/virtex-redcaps/virtex/model_zoo/__init__.py deleted file mode 100644 index d3ada912fc95eb52edc418d1dbad7c4392295f30..0000000000000000000000000000000000000000 --- a/spaces/umichVision/virtex-redcaps/virtex/model_zoo/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .model_zoo import get - -__all__ = ["get"] diff --git a/spaces/vaibhavsharda/semantic_clustering/twc_clustering.py b/spaces/vaibhavsharda/semantic_clustering/twc_clustering.py deleted file mode 100644 index 200567399b6901e26a958d8e15afcc7c032efe85..0000000000000000000000000000000000000000 --- a/spaces/vaibhavsharda/semantic_clustering/twc_clustering.py +++ /dev/null @@ -1,177 +0,0 @@ -from scipy.spatial.distance import cosine -import argparse -import json -import pdb -import torch -import torch.nn.functional as F -import numpy as np -import time -from collections import OrderedDict - - -class TWCClustering: - def __init__(self): - print("In Zscore Clustering") - - def compute_matrix(self,embeddings): - #print("Computing similarity matrix ...)") - embeddings= np.array(embeddings) - start = time.time() - vec_a = embeddings.T #vec_a shape (1024,) - vec_a = vec_a/np.linalg.norm(vec_a,axis=0) #Norm is along axis 0 - rows - vec_a = vec_a.T #vec_a shape becomes (,1024) - similarity_matrix = np.inner(vec_a,vec_a) - end = time.time() - time_val = (end-start)*1000 - #print(f"Similarity matrix computation complete. Time taken:{(time_val/(1000*60)):.2f} minutes") - return similarity_matrix - - def get_terms_above_threshold(self,matrix,embeddings,pivot_index,threshold): - run_index = pivot_index - picked_arr = [] - while (run_index < len(embeddings)): - if (matrix[pivot_index][run_index] >= threshold): - picked_arr.append(run_index) - run_index += 1 - return picked_arr - - def update_picked_dict_arr(self,picked_dict,arr): - for i in range(len(arr)): - picked_dict[arr[i]] = 1 - - def update_picked_dict(self,picked_dict,in_dict): - for key in in_dict: - picked_dict[key] = 1 - - def find_pivot_subgraph(self,pivot_index,arr,matrix,threshold,strict_cluster = True): - center_index = pivot_index - center_score = 0 - center_dict = {} - for i in range(len(arr)): - node_i_index = arr[i] - running_score = 0 - temp_dict = {} - for j in range(len(arr)): - node_j_index = arr[j] - cosine_dist = matrix[node_i_index][node_j_index] - if ((cosine_dist < threshold) and strict_cluster): - continue - running_score += cosine_dist - temp_dict[node_j_index] = cosine_dist - if (running_score > center_score): - center_index = node_i_index - center_dict = temp_dict - center_score = running_score - sorted_d = OrderedDict(sorted(center_dict.items(), key=lambda kv: kv[1], reverse=True)) - return {"pivot_index":center_index,"orig_index":pivot_index,"neighs":sorted_d} - - - def update_overlap_stats(self,overlap_dict,cluster_info): - arr = list(cluster_info["neighs"].keys()) - for val in arr: - if (val not in overlap_dict): - overlap_dict[val] = 1 - else: - overlap_dict[val] += 1 - - def bucket_overlap(self,overlap_dict): - bucket_dict = {} - for key in overlap_dict: - if (overlap_dict[key] not in bucket_dict): - bucket_dict[overlap_dict[key]] = 1 - else: - bucket_dict[overlap_dict[key]] += 1 - sorted_d = OrderedDict(sorted(bucket_dict.items(), key=lambda kv: kv[1], reverse=False)) - return sorted_d - - def merge_clusters(self,ref_cluster,curr_cluster): - dup_arr = ref_cluster.copy() - for j in range(len(curr_cluster)): - if (curr_cluster[j] not in dup_arr): - 
ref_cluster.append(curr_cluster[j]) - - - def non_overlapped_clustering(self,matrix,embeddings,threshold,mean,std,cluster_dict): - picked_dict = {} - overlap_dict = {} - candidates = [] - - for i in range(len(embeddings)): - if (i in picked_dict): - continue - zscore = mean + threshold*std - arr = self.get_terms_above_threshold(matrix,embeddings,i,zscore) - candidates.append(arr) - self.update_picked_dict_arr(picked_dict,arr) - - # Merge arrays to create non-overlapping sets - run_index_i = 0 - while (run_index_i < len(candidates)): - ref_cluster = candidates[run_index_i] - run_index_j = run_index_i + 1 - found = False - while (run_index_j < len(candidates)): - curr_cluster = candidates[run_index_j] - for k in range(len(curr_cluster)): - if (curr_cluster[k] in ref_cluster): - self.merge_clusters(ref_cluster,curr_cluster) - candidates.pop(run_index_j) - found = True - run_index_i = 0 - break - if (found): - break - else: - run_index_j += 1 - if (not found): - run_index_i += 1 - - - zscore = mean + threshold*std - for i in range(len(candidates)): - arr = candidates[i] - cluster_info = self.find_pivot_subgraph(arr[0],arr,matrix,zscore,strict_cluster = False) - cluster_dict["clusters"].append(cluster_info) - return {} - - def overlapped_clustering(self,matrix,embeddings,threshold,mean,std,cluster_dict): - picked_dict = {} - overlap_dict = {} - - zscore = mean + threshold*std - for i in range(len(embeddings)): - if (i in picked_dict): - continue - arr = self.get_terms_above_threshold(matrix,embeddings,i,zscore) - cluster_info = self.find_pivot_subgraph(i,arr,matrix,zscore,strict_cluster = True) - self.update_picked_dict(picked_dict,cluster_info["neighs"]) - self.update_overlap_stats(overlap_dict,cluster_info) - cluster_dict["clusters"].append(cluster_info) - sorted_d = self.bucket_overlap(overlap_dict) - return sorted_d - - - def cluster(self,output_file,texts,embeddings,threshold,clustering_type): - is_overlapped = True if clustering_type == "overlapped" else False - matrix = self.compute_matrix(embeddings) - mean = np.mean(matrix) - std = np.std(matrix) - zscores = [] - inc = 0 - value = mean - while (value < 1): - zscores.append({"threshold":inc,"cosine":round(value,2)}) - inc += 1 - value = mean + inc*std - #print("In clustering:",round(std,2),zscores) - cluster_dict = {} - cluster_dict["clusters"] = [] - if (is_overlapped): - sorted_d = self.overlapped_clustering(matrix,embeddings,threshold,mean,std,cluster_dict) - else: - sorted_d = self.non_overlapped_clustering(matrix,embeddings,threshold,mean,std,cluster_dict) - curr_threshold = f"{threshold} (cosine:{mean+threshold*std:.2f})" - cluster_dict["info"] ={"mean":mean,"std":std,"current_threshold":curr_threshold,"zscores":zscores,"overlap":list(sorted_d.items())} - return cluster_dict - - diff --git a/spaces/vcasadei/banana-defect-detection/app.py b/spaces/vcasadei/banana-defect-detection/app.py deleted file mode 100644 index 3990f8a9bb6e3e4abec500928ea07ceb5a6dca94..0000000000000000000000000000000000000000 --- a/spaces/vcasadei/banana-defect-detection/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import gradio as gr -from huggingface_hub import hf_hub_download -from PIL import Image - -REPO_ID = "vcasadei/yolov5-banana-defect-detection" -FILENAME = "best.pt" - -yolov5_weights = hf_hub_download(repo_id=REPO_ID, filename=FILENAME) - -model = torch.hub.load('ultralytics/yolov5', 'custom', path=yolov5_weights, force_reload=True) # local repo - -def object_detection(im, size=640): - results = model(im) # inference - #results.print() # 
print results to screen - #results.show() # display results - #results.save() # save as results1.jpg, results2.jpg... etc. - results.render() # updates results.imgs with boxes and labels - return Image.fromarray(results.imgs[0]) - -title = "Identificação de Defeitos em Banana" -description = """Esse modelo é uma pequena demonstração baseada em uma análise de cerca de 60 imagens somente. Para resultados mais confiáveis e genéricos, são necessários mais exemplos (imagens). -""" - -image = gr.inputs.Image(shape=(640, 640), image_mode="RGB", source="upload", label="Imagem", optional=False) -outputs = gr.outputs.Image(type="pil", label="Output Image") - -gr.Interface( - fn=object_detection, - inputs=image, - outputs=outputs, - title=title, - description=description, - examples=[["sample_images/IMG_0125.JPG"], ["sample_images/IMG_0129.JPG"], - ["sample_images/IMG_0157.JPG"], ["sample_images/IMG_0158.JPG"]], -).launch() \ No newline at end of file diff --git a/spaces/vladocar/3dfood/README.md b/spaces/vladocar/3dfood/README.md deleted file mode 100644 index 3fbd91eaee8207ea4983666fe8f9f792d92c0d83..0000000000000000000000000000000000000000 --- a/spaces/vladocar/3dfood/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 3dfood -emoji: 👀 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vorstcavry/VoCh-beta/app.py b/spaces/vorstcavry/VoCh-beta/app.py deleted file mode 100644 index 1a408c09c7c8adb2d98d958dca4be9debf0e7ab5..0000000000000000000000000000000000000000 --- a/spaces/vorstcavry/VoCh-beta/app.py +++ /dev/null @@ -1,186 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 1000 and limitation: - return "Please upload an audio file that is less than 1000 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
            <center> Voice Changer\n"
-            "## <center>
            Masukan Vokal tanpa instrument/tanpa musik.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=vorstcavry.VoCh)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
            <div align="center">'
-                            f'<div>{title}</div>\n'+
-                            (f'<div>Model author: {author}</div>' if author else "")+
-                            (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                            '</div>
            ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 1000 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_oas3_api_svc.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_oas3_api_svc.py deleted file mode 100644 index 5c23f6566cce23a42f1b7c9ef02c4720dd7b1a4d..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_oas3_api_svc.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/17 -@Author : mashenquan -@File : metagpt_oas3_api_svc.py -@Desc : MetaGPT OpenAPI Specification 3.0 REST API service -""" -import asyncio -from pathlib import Path -import sys - -import connexion - -sys.path.append(str(Path(__file__).resolve().parent.parent.parent)) # fix-bug: No module named 'metagpt' - - -def oas_http_svc(): - """Start the OAS 3.0 OpenAPI HTTP service""" - app = connexion.AioHttpApp(__name__, specification_dir='../../.well-known/') - app.add_api("metagpt_oas3_api.yaml") - app.add_api("openapi.yaml") - app.run(port=8080) - - -async def async_main(): - """Start the OAS 3.0 OpenAPI HTTP service in the background.""" - loop = asyncio.get_event_loop() - loop.run_in_executor(None, oas_http_svc) - - # TODO: replace following codes: - while True: - await asyncio.sleep(1) - print("sleep") - - -def main(): - oas_http_svc() - - -if __name__ == "__main__": - # asyncio.run(async_main()) - main() diff --git a/spaces/windoge/anime-ai-detect/README.md b/spaces/windoge/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/windoge/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wong26/faster-whisper-webui/src/whisper/whisperFactory.py b/spaces/wong26/faster-whisper-webui/src/whisper/whisperFactory.py deleted file mode 100644 index 
58fc840b7e60947fec4a98b2833ff03e7ad7b7de..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/src/whisper/whisperFactory.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import List -from src import modelCache -from src.config import ModelConfig -from src.whisper.abstractWhisperContainer import AbstractWhisperContainer - -def create_whisper_container(whisper_implementation: str, - model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: modelCache = None, models: List[ModelConfig] = []) -> AbstractWhisperContainer: - print("Creating whisper container for " + whisper_implementation) - - if (whisper_implementation == "whisper"): - from src.whisper.whisperContainer import WhisperContainer - return WhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models) - elif (whisper_implementation == "faster-whisper" or whisper_implementation == "faster_whisper"): - from src.whisper.fasterWhisperContainer import FasterWhisperContainer - return FasterWhisperContainer(model_name=model_name, device=device, compute_type=compute_type, download_root=download_root, cache=cache, models=models) - else: - raise ValueError("Unknown Whisper implementation: " + whisper_implementation) \ No newline at end of file diff --git a/spaces/wootang03/text_generator/README.md b/spaces/wootang03/text_generator/README.md deleted file mode 100644 index a67579dface6818d4f9f8f1e994c6cfa91bc7713..0000000000000000000000000000000000000000 --- a/spaces/wootang03/text_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generator -emoji: 🚀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wuhao2222/WarriorMama777-OrangeMixs/README.md b/spaces/wuhao2222/WarriorMama777-OrangeMixs/README.md deleted file mode 100644 index 28dc37a80061e5f98653db21d35abb1907cb80c2..0000000000000000000000000000000000000000 --- a/spaces/wuhao2222/WarriorMama777-OrangeMixs/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WarriorMama777 OrangeMixs -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wy213/yangAI/Dockerfile b/spaces/wy213/yangAI/Dockerfile deleted file mode 100644 index b5bdb0bbc15ae70ff9fd0c21d8205f0a6c237961..0000000000000000000000000000000000000000 --- a/spaces/wy213/yangAI/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . 
- -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="ksJ8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/xdecoder/Instruct-X-Decoder/tasks/open_sem.py b/spaces/xdecoder/Instruct-X-Decoder/tasks/open_sem.py deleted file mode 100644 index 04b95fc9fff82951cf6683a5a2f0632bf30837e4..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/tasks/open_sem.py +++ /dev/null @@ -1,57 +0,0 @@ -# -------------------------------------------------------- -# X-Decoder -- Generalized Decoding for Pixel, Image, and Language -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Xueyan Zou (xueyan@cs.wisc.edu) -# -------------------------------------------------------- - -import os -import cv2 -import torch -import numpy as np -from PIL import Image -from torchvision import transforms -from utils.visualizer import Visualizer -from detectron2.utils.colormap import random_color -from detectron2.data import MetadataCatalog - - -t = [] -t.append(transforms.Resize(512, interpolation=Image.BICUBIC)) -transform = transforms.Compose(t) -metadata = MetadataCatalog.get('ade20k_panoptic_train') - -def open_semseg(model, image, texts, inpainting_text, *args, **kwargs): - stuff_classes = [x.strip() for x in texts.split(',')] - stuff_colors = [random_color(rgb=True, maximum=255).astype(np.int32).tolist() for _ in range(len(stuff_classes))] - stuff_dataset_id_to_contiguous_id = {x:x for x in range(len(stuff_classes))} - - MetadataCatalog.get("demo").set( - stuff_colors=stuff_colors, - stuff_classes=stuff_classes, - stuff_dataset_id_to_contiguous_id=stuff_dataset_id_to_contiguous_id, - ) - model.model.sem_seg_head.predictor.lang_encoder.get_text_embeddings(stuff_classes + ["background"], is_eval=True) - metadata = MetadataCatalog.get('demo') - model.model.metadata = metadata - model.model.sem_seg_head.num_classes = len(stuff_classes) - - with torch.no_grad(): - image_ori = transform(image) - width = image_ori.size[0] - height = image_ori.size[1] - image = transform(image_ori) - image = np.asarray(image) - images = torch.from_numpy(image.copy()).permute(2,0,1).cuda() - - batch_inputs = [{'image': images, 'height': height, 'width': width}] - outputs = model.forward(batch_inputs) - visual = Visualizer(image_ori, metadata=metadata) - - sem_seg = outputs[-1]['sem_seg'].max(0)[1] - demo = visual.draw_sem_seg(sem_seg.cpu(), alpha=0.5) # rgb Image - res = demo.get_image() - - MetadataCatalog.remove('demo') - torch.cuda.empty_cache() - return Image.fromarray(res), '', None \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/yolov5/utils/segment/general.py b/spaces/xfys/yolov5_tracking/yolov5/utils/segment/general.py deleted file mode 100644 index f1b2f1dd120ff47eec618e0c25239c28c4d88475..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/utils/segment/general.py +++ /dev/null @@ -1,160 +0,0 @@ -import cv2 -import numpy as np -import torch -import torch.nn.functional as F - - -def crop_mask(masks, boxes): - """ - "Crop" predicted masks by zeroing out everything not in the predicted bbox. - Vectorized by Chong (thanks Chong). 
- - Args: - - masks should be a size [n, h, w] tensor of masks - - boxes should be a size [n, 4] tensor of bbox coords in relative point form - """ - - n, h, w = masks.shape - x1, y1, x2, y2 = torch.chunk(boxes[:, :, None], 4, 1) # x1 shape(1,1,n) - r = torch.arange(w, device=masks.device, dtype=x1.dtype)[None, None, :] # rows shape(1,w,1) - c = torch.arange(h, device=masks.device, dtype=x1.dtype)[None, :, None] # cols shape(h,1,1) - - return masks * ((r >= x1) * (r < x2) * (c >= y1) * (c < y2)) - - -def process_mask_upsample(protos, masks_in, bboxes, shape): - """ - Crop after upsample. - protos: [mask_dim, mask_h, mask_w] - masks_in: [n, mask_dim], n is number of masks after nms - bboxes: [n, 4], n is number of masks after nms - shape: input_image_size, (h, w) - - return: h, w, n - """ - - c, mh, mw = protos.shape # CHW - masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) - masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW - masks = crop_mask(masks, bboxes) # CHW - return masks.gt_(0.5) - - -def process_mask(protos, masks_in, bboxes, shape, upsample=False): - """ - Crop before upsample. - proto_out: [mask_dim, mask_h, mask_w] - out_masks: [n, mask_dim], n is number of masks after nms - bboxes: [n, 4], n is number of masks after nms - shape:input_image_size, (h, w) - - return: h, w, n - """ - - c, mh, mw = protos.shape # CHW - ih, iw = shape - masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) # CHW - - downsampled_bboxes = bboxes.clone() - downsampled_bboxes[:, 0] *= mw / iw - downsampled_bboxes[:, 2] *= mw / iw - downsampled_bboxes[:, 3] *= mh / ih - downsampled_bboxes[:, 1] *= mh / ih - - masks = crop_mask(masks, downsampled_bboxes) # CHW - if upsample: - masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW - return masks.gt_(0.5) - - -def process_mask_native(protos, masks_in, bboxes, shape): - """ - Crop after upsample. 
- protos: [mask_dim, mask_h, mask_w] - masks_in: [n, mask_dim], n is number of masks after nms - bboxes: [n, 4], n is number of masks after nms - shape: input_image_size, (h, w) - - return: h, w, n - """ - c, mh, mw = protos.shape # CHW - masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) - gain = min(mh / shape[0], mw / shape[1]) # gain = old / new - pad = (mw - shape[1] * gain) / 2, (mh - shape[0] * gain) / 2 # wh padding - top, left = int(pad[1]), int(pad[0]) # y, x - bottom, right = int(mh - pad[1]), int(mw - pad[0]) - masks = masks[:, top:bottom, left:right] - - masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW - masks = crop_mask(masks, bboxes) # CHW - return masks.gt_(0.5) - - -def scale_image(im1_shape, masks, im0_shape, ratio_pad=None): - """ - img1_shape: model input shape, [h, w] - img0_shape: origin pic shape, [h, w, 3] - masks: [h, w, num] - """ - # Rescale coordinates (xyxy) from im1_shape to im0_shape - if ratio_pad is None: # calculate from im0_shape - gain = min(im1_shape[0] / im0_shape[0], im1_shape[1] / im0_shape[1]) # gain = old / new - pad = (im1_shape[1] - im0_shape[1] * gain) / 2, (im1_shape[0] - im0_shape[0] * gain) / 2 # wh padding - else: - pad = ratio_pad[1] - top, left = int(pad[1]), int(pad[0]) # y, x - bottom, right = int(im1_shape[0] - pad[1]), int(im1_shape[1] - pad[0]) - - if len(masks.shape) < 2: - raise ValueError(f'"len of masks shape" should be 2 or 3, but got {len(masks.shape)}') - masks = masks[top:bottom, left:right] - # masks = masks.permute(2, 0, 1).contiguous() - # masks = F.interpolate(masks[None], im0_shape[:2], mode='bilinear', align_corners=False)[0] - # masks = masks.permute(1, 2, 0).contiguous() - masks = cv2.resize(masks, (im0_shape[1], im0_shape[0])) - - if len(masks.shape) == 2: - masks = masks[:, :, None] - return masks - - -def mask_iou(mask1, mask2, eps=1e-7): - """ - mask1: [N, n] m1 means number of predicted objects - mask2: [M, n] m2 means number of gt objects - Note: n means image_w x image_h - - return: masks iou, [N, M] - """ - intersection = torch.matmul(mask1, mask2.t()).clamp(0) - union = (mask1.sum(1)[:, None] + mask2.sum(1)[None]) - intersection # (area1 + area2) - intersection - return intersection / (union + eps) - - -def masks_iou(mask1, mask2, eps=1e-7): - """ - mask1: [N, n] m1 means number of predicted objects - mask2: [N, n] m2 means number of gt objects - Note: n means image_w x image_h - - return: masks iou, (N, ) - """ - intersection = (mask1 * mask2).sum(1).clamp(0) # (N, ) - union = (mask1.sum(1) + mask2.sum(1))[None] - intersection # (area1 + area2) - intersection - return intersection / (union + eps) - - -def masks2segments(masks, strategy='largest'): - # Convert masks(n,160,160) into segments(n,xy) - segments = [] - for x in masks.int().cpu().numpy().astype('uint8'): - c = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0] - if c: - if strategy == 'concat': # concatenate all segments - c = np.concatenate([x.reshape(-1, 2) for x in c]) - elif strategy == 'largest': # select largest segment - c = np.array(c[np.array([len(x) for x in c]).argmax()]).reshape(-1, 2) - else: - c = np.zeros((0, 2)) # no segments found - segments.append(c.astype('float32')) - return segments diff --git a/spaces/xiaohuolong/ChuanhuChatGPT/ChuanhuChatbot.py b/spaces/xiaohuolong/ChuanhuChatGPT/ChuanhuChatbot.py deleted file mode 100644 index e3ecec4e0cc243b321149211bc8f960938e9ad6d..0000000000000000000000000000000000000000 --- 
a/spaces/xiaohuolong/ChuanhuChatGPT/ChuanhuChatbot.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr -# import openai -import os -import sys -import argparse -from utils import * -from presets import * - - -my_api_key = "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" # 在这里输入你的 API 密钥 - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess - -with gr.Blocks(css=customCSS) as demo: - gr.HTML(title) - with gr.Row(): - keyTxt = gr.Textbox(show_label=False, placeholder=f"在这里输入你的OpenAI API-key...", - value=my_api_key, type="password", visible=not HIDE_MY_KEY).style(container=True) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除最近一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - status_display = gr.Markdown("status: ready") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = 
gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - delLastBtn.click(delete_last_conversation, [chatbot, history, token_count, use_streaming_checkbox], [ - chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - - templateApplyBtn.click(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -print("川虎的温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - #if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - #if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - 
#demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/xswu/HPSv2/src/open_clip/utils.py b/spaces/xswu/HPSv2/src/open_clip/utils.py deleted file mode 100644 index 51e80c5e296b24cae130ab0459baf268e0db7673..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/open_clip/utils.py +++ /dev/null @@ -1,60 +0,0 @@ -from itertools import repeat -import collections.abc - -from torch import nn as nn -from torchvision.ops.misc import FrozenBatchNorm2d - - -def freeze_batch_norm_2d(module, module_match={}, name=''): - """ - Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is - itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and - returned. Otherwise, the module is walked recursively and submodules are converted in place. - - Args: - module (torch.nn.Module): Any PyTorch module. - module_match (dict): Dictionary of full module names to freeze (all if empty) - name (str): Full module name (prefix) - - Returns: - torch.nn.Module: Resulting module - - Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 - """ - res = module - is_match = True - if module_match: - is_match = name in module_match - if is_match and isinstance(module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)): - res = FrozenBatchNorm2d(module.num_features) - res.num_features = module.num_features - res.affine = module.affine - if module.affine: - res.weight.data = module.weight.data.clone().detach() - res.bias.data = module.bias.data.clone().detach() - res.running_mean.data = module.running_mean.data - res.running_var.data = module.running_var.data - res.eps = module.eps - else: - for child_name, child in module.named_children(): - full_child_name = '.'.join([name, child_name]) if name else child_name - new_child = freeze_batch_norm_2d(child, module_match, full_child_name) - if new_child is not child: - res.add_module(child_name, new_child) - return res - - -# From PyTorch internals -def _ntuple(n): - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = lambda n, x: _ntuple(n)(x) diff --git a/spaces/xuetao/bingo3/src/components/voice.tsx b/spaces/xuetao/bingo3/src/components/voice.tsx deleted file mode 100644 index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - 
case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? ( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/xwsm/gpt/colorful.py b/spaces/xwsm/gpt/colorful.py deleted file mode 100644 index d90972bb30a8f8fb932abbc34232e474df4d5205..0000000000000000000000000000000000000000 --- a/spaces/xwsm/gpt/colorful.py +++ /dev/null @@ -1,91 +0,0 @@ -import platform -from sys import stdout - -if platform.system()=="Linux": - pass -else: - from colorama import init - init() - -# Do you like the elegance of Chinese characters? -def print红(*kw,**kargs): - print("\033[0;31m",*kw,"\033[0m",**kargs) -def print绿(*kw,**kargs): - print("\033[0;32m",*kw,"\033[0m",**kargs) -def print黄(*kw,**kargs): - print("\033[0;33m",*kw,"\033[0m",**kargs) -def print蓝(*kw,**kargs): - print("\033[0;34m",*kw,"\033[0m",**kargs) -def print紫(*kw,**kargs): - print("\033[0;35m",*kw,"\033[0m",**kargs) -def print靛(*kw,**kargs): - print("\033[0;36m",*kw,"\033[0m",**kargs) - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - - - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - -print_red = print红 -print_green = print绿 -print_yellow = print黄 -print_blue = print蓝 -print_purple = print紫 -print_indigo = print靛 - -print_bold_red = print亮红 -print_bold_green = print亮绿 -print_bold_yellow = print亮黄 -print_bold_blue = print亮蓝 -print_bold_purple = print亮紫 -print_bold_indigo = print亮靛 - -if not stdout.isatty(): - # redirection, avoid a fucked up log file - print红 = print - print绿 = print - print黄 = print - print蓝 = print - print紫 = print - print靛 = print - print亮红 = print - print亮绿 = print - print亮黄 = print - print亮蓝 = print - print亮紫 = print - print亮靛 = print - print_red = print - print_green = print - print_yellow = print - print_blue = print - print_purple = print - print_indigo = print - print_bold_red = print - print_bold_green = print - print_bold_yellow = print - print_bold_blue = print - print_bold_purple = print - print_bold_indigo = print \ No newline at end of file diff --git a/spaces/xwsm/gpt/request_llm/bridge_stackclaude.py b/spaces/xwsm/gpt/request_llm/bridge_stackclaude.py deleted file mode 100644 index f9f3e843aabc050160496d710b51bd9d70b6ce3d..0000000000000000000000000000000000000000 --- a/spaces/xwsm/gpt/request_llm/bridge_stackclaude.py +++ /dev/null @@ -1,296 +0,0 @@ -from .bridge_newbing import preprocess_newbing_out, preprocess_newbing_out_simple -from multiprocessing import Process, Pipe -from toolbox import update_ui, get_conf, trimmed_format_exc -import threading 
-import importlib -import logging -import time -from toolbox import get_conf -import asyncio -load_message = "正在加载Claude组件,请稍候..." - -try: - """ - ======================================================================== - 第一部分:Slack API Client - https://github.com/yokonsan/claude-in-slack-api - ======================================================================== - """ - - from slack_sdk.errors import SlackApiError - from slack_sdk.web.async_client import AsyncWebClient - - class SlackClient(AsyncWebClient): - """SlackClient类用于与Slack API进行交互,实现消息发送、接收等功能。 - - 属性: - - CHANNEL_ID:str类型,表示频道ID。 - - 方法: - - open_channel():异步方法。通过调用conversations_open方法打开一个频道,并将返回的频道ID保存在属性CHANNEL_ID中。 - - chat(text: str):异步方法。向已打开的频道发送一条文本消息。 - - get_slack_messages():异步方法。获取已打开频道的最新消息并返回消息列表,目前不支持历史消息查询。 - - get_reply():异步方法。循环监听已打开频道的消息,如果收到"Typing…_"结尾的消息说明Claude还在继续输出,否则结束循环。 - - """ - CHANNEL_ID = None - - async def open_channel(self): - response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID')[0]) - self.CHANNEL_ID = response["channel"]["id"] - - async def chat(self, text): - if not self.CHANNEL_ID: - raise Exception("Channel not found.") - - resp = await self.chat_postMessage(channel=self.CHANNEL_ID, text=text) - self.LAST_TS = resp["ts"] - - async def get_slack_messages(self): - try: - # TODO:暂时不支持历史消息,因为在同一个频道里存在多人使用时历史消息渗透问题 - resp = await self.conversations_history(channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1) - msg = [msg for msg in resp["messages"] - if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')[0]] - return msg - except (SlackApiError, KeyError) as e: - raise RuntimeError(f"获取Slack消息失败。") - - async def get_reply(self): - while True: - slack_msgs = await self.get_slack_messages() - if len(slack_msgs) == 0: - await asyncio.sleep(0.5) - continue - - msg = slack_msgs[-1] - if msg["text"].endswith("Typing…_"): - yield False, msg["text"] - else: - yield True, msg["text"] - break -except: - pass - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" - - -class ClaudeHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.claude_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - if self.success: - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import slack_sdk - self.info = "依赖检测通过,等待Claude响应。注意目前不能多人同时调用Claude接口(有线程锁),否则将导致每个人的Claude问询历史互相渗透。调用Claude时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Claude,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_slackclaude.txt`安装Claude的依赖,然后重启程序。" - self.success = False - - def ready(self): - return self.claude_model is not None - - async def async_run(self): - await self.claude_model.open_channel() - while True: - # 等待 - kwargs = self.child.recv() - question = kwargs['query'] - history = kwargs['history'] - # system_prompt=kwargs['system_prompt'] - - # 是否重置 - if len(self.local_history) > 0 and len(history) == 0: - # await self.claude_model.reset() - self.local_history = [] - - # 开始问问题 - prompt = "" - # Slack API最好不要添加系统提示 - # if system_prompt not in self.local_history: - # self.local_history.append(system_prompt) - # prompt += system_prompt + '\n' - - # 追加历史 - for ab in history: - a, b = ab - if a not in self.local_history: - self.local_history.append(a) - prompt += a + '\n' - # if b not 
in self.local_history: - # self.local_history.append(b) - # prompt += b + '\n' - - # 问题 - prompt += question - self.local_history.append(question) - print('question:', prompt) - # 提交 - await self.claude_model.chat(prompt) - # 获取回复 - # async for final, response in self.claude_model.get_reply(): - # await self.handle_claude_response(final, response) - async for final, response in self.claude_model.get_reply(): - if not final: - print(response) - self.child.send(str(response)) - else: - # 防止丢失最后一条消息 - slack_msgs = await self.claude_model.get_slack_messages() - last_msg = slack_msgs[-1]["text"] if slack_msgs and len(slack_msgs) > 0 else "" - if last_msg: - self.child.send(last_msg) - print('-------- receive final ---------') - self.child.send('[Finish]') - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.claude_model is None) or (not self.success): - # 代理设置 - proxies, = get_conf('proxies') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - - try: - SLACK_CLAUDE_USER_TOKEN, = get_conf('SLACK_CLAUDE_USER_TOKEN') - self.claude_model = SlackClient(token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https) - print('Claude组件初始化成功。') - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Claude组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Claude组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] Claude失败 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() - self.parent.send(kwargs) # 发送请求到子进程 - while True: - res = self.parent.recv() # 等待Claude回复的片段 - if res == '[Finish]': - break # 结束 - elif res == '[Fail]': - self.success = False - break - else: - yield res # Claude回复的片段 - self.threadLock.release() - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global claude_handle -claude_handle = None - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global claude_handle - if (claude_handle is None) or (not claude_handle.success): - claude_handle = ClaudeHandle() - observe_window[0] = load_message + "\n\n" + claude_handle.info - if not claude_handle.success: - error = claude_handle.info - claude_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]]) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - observe_window[0] = "[Local Message]: 等待Claude响应中 ..." 
- for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待Claude响应中 ...")) - - global claude_handle - if (claude_handle is None) or (not claude_handle.success): - claude_handle = ClaudeHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + claude_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not claude_handle.success: - claude_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: - inputs = core_functional[additional_fn]["PreProcess"]( - inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + \ - inputs + core_functional[additional_fn]["Suffix"] - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]]) - - chatbot[-1] = (inputs, "[Local Message]: 等待Claude响应中 ...") - response = "[Local Message]: 等待Claude响应中 ..." - yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待Claude响应中 ...": - response = "[Local Message]: Claude响应异常,请刷新界面重试 ..." - history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") diff --git a/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp b/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). 
-// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. -enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 
0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. -template <class T> inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); } - -const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329; -static inline uint8 clamp(int i) { if (static_cast<uint>(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast<uint8>(i); } - -static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--) - pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--) - pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels) -{ - for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; } -} - -// Forward DCT - DCT derived from jfdctint. 
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast(var) * static_cast(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
-static void calculate_minimum_redundancy(sym_freq *A, int n) -{ - int root, leaf, next, avbl, used, dpth; - if (n==0) return; else if (n==1) { A[0].m_key = 1; return; } - A[0].m_key += A[1].m_key; root = 0; leaf = 2; - for (next=1; next < n-1; next++) - { - if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = next; } else A[next].m_key = A[leaf++].m_key; - if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key += A[root].m_key; A[root++].m_key = next; } else A[next].m_key += A[leaf++].m_key; - } - A[n-2].m_key = 0; - for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1; - avbl = 1; used = dpth = 0; root = n-2; next = n-1; - while (avbl>0) - { - while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; } - while (avbl>used) { A[next--].m_key = dpth; avbl--; } - avbl = 2*used; dpth++; used = 0; - } -} - -// Limits canonical Huffman code table's max code size to max_code_size. -static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size) -{ - if (code_list_len <= 1) return; - - for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i]; - - uint32 total = 0; - for (int i = max_code_size; i > 0; i--) - total += (((uint32)pNum_codes[i]) << (max_code_size - i)); - - while (total != (1UL << max_code_size)) - { - pNum_codes[max_code_size]--; - for (int i = max_code_size - 1; i > 0; i--) - { - if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; } - } - total--; - } -} - -// Generates an optimized offman table. -void jpeg_encoder::optimize_huffman_table(int table_num, int table_len) -{ - sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS]; - syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's - int num_used_syms = 1; - const uint32 *pSym_count = &m_huff_count[table_num][0]; - for (int i = 0; i < table_len; i++) - if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; } - sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1); - calculate_minimum_redundancy(pSyms, num_used_syms); - - // Count the # of symbols of each code size. - int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes); - for (int i = 0; i < num_used_syms; i++) - num_codes[pSyms[i].m_key]++; - - const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol) - huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT); - - // Compute m_huff_bits array, which contains the # of symbols per code size. - clear_obj(m_huff_bits[table_num]); - for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++) - m_huff_bits[table_num][i] = static_cast<uint8>(num_codes[i]); - - // Remove the dummy symbol added above, which must be in largest bucket. - for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--) - { - if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; } - } - - // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest). - for (int i = num_used_syms - 1; i >= 1; i--) - m_huff_val[table_num][num_used_syms - 1 - i] = static_cast<uint8>(pSyms[i].m_sym_index - 1); -} - -// JPEG marker generation. 
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
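The `compress_image_to_jpeg_file` wrapper above shows the intended calling protocol: for each pass (one normally, two when Huffman tables are optimized), feed every scanline top to bottom, then pass `NULL` once to flush the pass and, on the final pass, emit the EOI marker. A hedged Python-style restatement of that protocol (method names mirror the C++ API, but the binding itself is hypothetical):

```python
def compress_scanlines(encoder, image_bytes, width, height, channels):
    """Drive a jpge-like encoder; `encoder` is assumed to expose
    get_total_passes() and process_scanline() like the C++ class above."""
    stride = width * channels
    for _ in range(encoder.get_total_passes()):
        for y in range(height):
            scanline = image_bytes[y * stride:(y + 1) * stride]
            if not encoder.process_scanline(scanline):
                return False                      # a stream write failed
        if not encoder.process_scanline(None):    # end-of-image for this pass
            return False
    return True
```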
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/Navigation/LegacyFileMenu.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/Navigation/LegacyFileMenu.tsx deleted file mode 100644 index d798eccf2789fe1a0a9f07d0ffe6c46f1be42e78..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/Navigation/LegacyFileMenu.tsx +++ /dev/null @@ -1,80 +0,0 @@ -import { observer } from "mobx-react-lite" -import { ChangeEvent, FC } from "react" -import { Localized } from "../../../components/Localized" -import { MenuDivider, MenuItem } from "../../../components/Menu" -import { createSong, openSong, saveSong } from "../../actions" -import { useLocalization } from "../../hooks/useLocalization" -import { useStores } from "../../hooks/useStores" -import { useToast } from "../../hooks/useToast" - -const fileInputID = "OpenButtonInputFile" - -export const FileInput: FC< - React.PropsWithChildren<{ - onChange: (e: ChangeEvent) => void - }> -> = ({ onChange, children }) => ( - <> - - - -) - -export const LegacyFileMenu: FC<{ close: () => void }> = observer( - ({ close }) => { - const rootStore = useStores() - const toast = useToast() - const localized = useLocalization() - - const onClickNew = () => { - const { song } = rootStore - close() - if ( - song.isSaved || - confirm(localized("confirm-new", "Are you sure you want to continue?")) - ) { - createSong(rootStore)() - } - } - - const onClickOpen = async (e: ChangeEvent) => { - close() - try { - await openSong(rootStore)(e.currentTarget) - } catch (e) { - toast.error((e as Error).message) - } - } - - const onClickSave = () => { - close() - saveSong(rootStore)() - } - - return ( - <> - - new-song - - - - - - - open-song - - - - - save-song - - - ) - }, -) diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_nlvr.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_nlvr.py deleted file mode 100644 index 84b247bda2334c1fd894b6c11d33ef48c8e7df28..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_nlvr.py +++ /dev/null @@ -1,213 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path -import json -import pickle - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from models.blip_nlvr import blip_nlvr - -import utils -from utils import cosine_lr_schedule, warmup_lr_schedule -from data import create_dataset, create_sampler, create_loader - -def train(model, data_loader, optimizer, epoch, device, config): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=50, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - step_size = 10 - - for i,(image0, image1, text, targets) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - loss = model(images, text, targets=targets, train=True) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - metric_logger.update(loss=loss.item()) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(model, data_loader, device, config): - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - - header = 'Evaluation:' - print_freq = 50 - - for image0, image1, text, targets in metric_logger.log_every(data_loader, print_freq, header): - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - prediction = model(images, text, targets=targets, train=False) - - _, pred_class = prediction.max(1) - accuracy = (targets==pred_class).sum() / targets.size(0) - - metric_logger.meters['acc'].update(accuracy.item(), n=image0.size(0)) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating dataset") - datasets = create_dataset('nlvr', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler(datasets, [True,False,False], num_tasks, global_rank) - else: - samplers = [None, None, None] - - batch_size=[config['batch_size_train'],config['batch_size_test'],config['batch_size_test']] - train_loader, val_loader, test_loader = create_loader(datasets,samplers,batch_size=batch_size, - num_workers=[4,4,4],is_trains=[True,False,False], - 
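The training script above imports `cosine_lr_schedule` from BLIP's `utils` module, which is not part of this diff. A typical cosine decay of that shape looks roughly like the sketch below; treat it as illustrative rather than the exact BLIP implementation.

```python
import math
import torch

def cosine_lr_schedule(optimizer, epoch, max_epoch, init_lr, min_lr):
    # Decay from init_lr to min_lr along a half cosine over max_epoch epochs.
    lr = (init_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * epoch / max_epoch)) + min_lr
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr

opt = torch.optim.AdamW(torch.nn.Linear(4, 4).parameters(), lr=3e-5)
for epoch in range(3):
    cosine_lr_schedule(opt, epoch, max_epoch=10, init_lr=3e-5, min_lr=0.0)
    print(epoch, opt.param_groups[0]["lr"])   # starts at 3e-5 and decays toward 0
```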
collate_fns=[None,None,None]) - - #### Model #### - print("Creating model") - model = blip_nlvr(pretrained=config['pretrained'], image_size=config['image_size'], - vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - print("Start training") - start_time = time.time() - best = 0 - best_epoch = 0 - - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device, config) - - val_stats = evaluate(model, val_loader, device, config) - test_stats = evaluate(model, test_loader, device, config) - - if utils.is_main_process(): - if args.evaluate: - log_stats = {**{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - else: - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - 'epoch': epoch, - } - - if float(val_stats['acc'])>best: - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) - best = float(val_stats['acc']) - best_epoch = epoch - - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - if args.evaluate: - break - - dist.barrier() - - if utils.is_main_process(): - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write("best epoch: %d"%best_epoch) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/nlvr.yaml') - parser.add_argument('--output_dir', default='output/NLVR') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/ygangang/Image-Animation-using-Thin-Plate-Spline-Motion-Model/app.py b/spaces/ygangang/Image-Animation-using-Thin-Plate-Spline-Motion-Model/app.py deleted file mode 100644 index bff5f948c5574c51d35710951cfb6c0e7ca6264e..0000000000000000000000000000000000000000 --- 
a/spaces/ygangang/Image-Animation-using-Thin-Plate-Spline-Motion-Model/app.py +++ /dev/null @@ -1,133 +0,0 @@ -import gradio as gr -import os -import shutil -import torch -from PIL import Image -import argparse -import pathlib - -os.system("git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model") -os.chdir("Thin-Plate-Spline-Motion-Model") -os.system("mkdir checkpoints") -os.system("wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar") - - - -title = "# 图片动画" -DESCRIPTION = '''### 图片动画的Gradio实现, CVPR 2022. [Paper][Github Code] - -overview -''' -FOOTER = 'visitor badge' -ARTICLE = r""" ---- -

            Click to go back to the AI toolbox for more fun AI projects

            - -``` -""" - -def get_style_image_path(style_name: str) -> str: - base_path = 'assets' - filenames = { - 'source': 'source.png', - 'driving': 'driving.mp4', - } - return f'{base_path}/{filenames[style_name]}' - - -def get_style_image_markdown_text(style_name: str) -> str: - url = get_style_image_path(style_name) - return f'style image' - - -def update_style_image(style_name: str) -> dict: - text = get_style_image_markdown_text(style_name) - return gr.Markdown.update(value=text) - - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - -def set_example_video(example: list) -> dict: - return gr.Video.update(value=example[0]) - -def inference(img,vid): - if not os.path.exists('temp'): - os.system('mkdir temp') - - img.save("temp/image.jpg", "JPEG") - os.system(f"python demo.py --config config/vox-256.yaml --checkpoint ./checkpoints/vox.pth.tar --source_image 'temp/image.jpg' --driving_video {vid} --result_video './temp/result.mp4' --cpu") - return './temp/result.mp4' - - - -def main(): - with gr.Blocks(theme="huggingface", css='style.css') as demo: - gr.Markdown(title) - gr.Markdown(DESCRIPTION) - - with gr.Box(): - gr.Markdown('''## 第1步 (上传人脸图片) -- 拖一张含人脸的图片到 **输入图片**. - - 如果图片中有多张人脸, 使用右上角的编辑按钮裁剪图片. -''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_image = gr.Image(label='输入图片', - type="pil") - - with gr.Row(): - paths = sorted(pathlib.Path('assets').glob('*.png')) - example_images = gr.Dataset(components=[input_image], - samples=[[path.as_posix()] - for path in paths]) - - with gr.Box(): - gr.Markdown('''## 第2步 (选择动态视频) -- **为人脸图片选择目标视频**. -''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - driving_video = gr.Video(label='目标视频', - format="mp4") - - with gr.Row(): - paths = sorted(pathlib.Path('assets').glob('*.mp4')) - example_video = gr.Dataset(components=[driving_video], - samples=[[path.as_posix()] - for path in paths]) - - with gr.Box(): - gr.Markdown('''## 第3步 (基于视频生成动态图片) -- 点击 **开始** 按钮. (注意: 由于是在CPU上运行, 生成最终结果需要花费大约3分钟.) 
-''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - generate_button = gr.Button('开始') - - with gr.Column(): - result = gr.Video(type="file", label="输出") - gr.Markdown(FOOTER) - generate_button.click(fn=inference, - inputs=[ - input_image, - driving_video - ], - outputs=result) - example_images.click(fn=set_example_image, - inputs=example_images, - outputs=example_images.components) - example_video.click(fn=set_example_video, - inputs=example_video, - outputs=example_video.components) - - demo.launch( - enable_queue=True, - debug=True - ) - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/yixin6178/ChatPaper/run.sh b/spaces/yixin6178/ChatPaper/run.sh deleted file mode 100644 index 7b87caa9d2828fe7b1016516e8e3d00f949e1c1c..0000000000000000000000000000000000000000 --- a/spaces/yixin6178/ChatPaper/run.sh +++ /dev/null @@ -1,5 +0,0 @@ -cd /app/grobid-0.6.2 -./gradlew run & -cd /app/ -nohup python backend.py & -streamlit run frontend.py --server.address 0.0.0.0 --server.port 7860 --server.enableCORS true --server.enableXsrfProtection false \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/download.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/download.py deleted file mode 100644 index 8af3c6397b442f1016640c51b4c54cfd9921fd6a..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/download.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from argparse import ArgumentParser - -from . import BaseTransformersCLICommand - - -def download_command_factory(args): - return DownloadCommand(args.model, args.cache_dir, args.force, args.trust_remote_code) - - -class DownloadCommand(BaseTransformersCLICommand): - @staticmethod - def register_subcommand(parser: ArgumentParser): - download_parser = parser.add_parser("download") - download_parser.add_argument( - "--cache-dir", type=str, default=None, help="Path to location to store the models" - ) - download_parser.add_argument( - "--force", action="store_true", help="Force the model to be download even if already in cache-dir" - ) - download_parser.add_argument( - "--trust-remote-code", - action="store_true", - help="Whether or not to allow for custom models defined on the Hub in their own modeling files. 
Use only if you've reviewed the code as it will execute on your local machine", - ) - download_parser.add_argument("model", type=str, help="Name of the model to download") - download_parser.set_defaults(func=download_command_factory) - - def __init__(self, model: str, cache: str, force: bool, trust_remote_code: bool): - self._model = model - self._cache = cache - self._force = force - self._trust_remote_code = trust_remote_code - - def run(self): - from ..models.auto import AutoModel, AutoTokenizer - - AutoModel.from_pretrained( - self._model, cache_dir=self._cache, force_download=self._force, trust_remote_code=self._trust_remote_code - ) - AutoTokenizer.from_pretrained( - self._model, cache_dir=self._cache, force_download=self._force, trust_remote_code=self._trust_remote_code - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/modeling_mobilenet_v1.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/modeling_mobilenet_v1.py deleted file mode 100644 index 3963e60f3562bd9608581470c8b8b33a395ebaa1..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/modeling_mobilenet_v1.py +++ /dev/null @@ -1,486 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Apple Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch MobileNetV1 model.""" - - -from typing import Optional, Union - -import torch -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import BaseModelOutputWithPoolingAndNoAttention, ImageClassifierOutputWithNoAttention -from ...modeling_utils import PreTrainedModel -from ...utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, logging -from .configuration_mobilenet_v1 import MobileNetV1Config - - -logger = logging.get_logger(__name__) - - -# General docstring -_CONFIG_FOR_DOC = "MobileNetV1Config" - -# Base docstring -_CHECKPOINT_FOR_DOC = "google/mobilenet_v1_1.0_224" -_EXPECTED_OUTPUT_SHAPE = [1, 1024, 7, 7] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "google/mobilenet_v1_1.0_224" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - - -MOBILENET_V1_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "google/mobilenet_v1_1.0_224", - "google/mobilenet_v1_0.75_192", - # See all MobileNetV1 models at https://huggingface.co/models?filter=mobilenet_v1 -] - - -def _build_tf_to_pytorch_map(model, config, tf_weights=None): - """ - A map of modules from TF to PyTorch. 
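`DownloadCommand.run()` above simply warms the local cache by instantiating the model and tokenizer. The direct equivalent in plain Python is shown below; the checkpoint name is only an example, not something referenced by this file.

```python
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"   # example checkpoint, substitute your own
AutoModel.from_pretrained(name, cache_dir=None, force_download=False, trust_remote_code=False)
AutoTokenizer.from_pretrained(name, cache_dir=None, force_download=False, trust_remote_code=False)
```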
- """ - - tf_to_pt_map = {} - - if isinstance(model, MobileNetV1ForImageClassification): - backbone = model.mobilenet_v1 - else: - backbone = model - - prefix = "MobilenetV1/Conv2d_0/" - tf_to_pt_map[prefix + "weights"] = backbone.conv_stem.convolution.weight - tf_to_pt_map[prefix + "BatchNorm/beta"] = backbone.conv_stem.normalization.bias - tf_to_pt_map[prefix + "BatchNorm/gamma"] = backbone.conv_stem.normalization.weight - tf_to_pt_map[prefix + "BatchNorm/moving_mean"] = backbone.conv_stem.normalization.running_mean - tf_to_pt_map[prefix + "BatchNorm/moving_variance"] = backbone.conv_stem.normalization.running_var - - for i in range(13): - tf_index = i + 1 - pt_index = i * 2 - - pointer = backbone.layer[pt_index] - prefix = f"MobilenetV1/Conv2d_{tf_index}_depthwise/" - tf_to_pt_map[prefix + "depthwise_weights"] = pointer.convolution.weight - tf_to_pt_map[prefix + "BatchNorm/beta"] = pointer.normalization.bias - tf_to_pt_map[prefix + "BatchNorm/gamma"] = pointer.normalization.weight - tf_to_pt_map[prefix + "BatchNorm/moving_mean"] = pointer.normalization.running_mean - tf_to_pt_map[prefix + "BatchNorm/moving_variance"] = pointer.normalization.running_var - - pointer = backbone.layer[pt_index + 1] - prefix = f"MobilenetV1/Conv2d_{tf_index}_pointwise/" - tf_to_pt_map[prefix + "weights"] = pointer.convolution.weight - tf_to_pt_map[prefix + "BatchNorm/beta"] = pointer.normalization.bias - tf_to_pt_map[prefix + "BatchNorm/gamma"] = pointer.normalization.weight - tf_to_pt_map[prefix + "BatchNorm/moving_mean"] = pointer.normalization.running_mean - tf_to_pt_map[prefix + "BatchNorm/moving_variance"] = pointer.normalization.running_var - - if isinstance(model, MobileNetV1ForImageClassification): - prefix = "MobilenetV1/Logits/Conv2d_1c_1x1/" - tf_to_pt_map[prefix + "weights"] = model.classifier.weight - tf_to_pt_map[prefix + "biases"] = model.classifier.bias - - return tf_to_pt_map - - -def load_tf_weights_in_mobilenet_v1(model, config, tf_checkpoint_path): - """Load TensorFlow checkpoints in a PyTorch model.""" - try: - import numpy as np - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow models in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." 
- ) - raise - - # Load weights from TF model - init_vars = tf.train.list_variables(tf_checkpoint_path) - tf_weights = {} - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_checkpoint_path, name) - tf_weights[name] = array - - # Build TF to PyTorch weights loading map - tf_to_pt_map = _build_tf_to_pytorch_map(model, config, tf_weights) - - for name, pointer in tf_to_pt_map.items(): - logger.info(f"Importing {name}") - if name not in tf_weights: - logger.info(f"{name} not in tf pre-trained weights, skipping") - continue - - array = tf_weights[name] - - if "depthwise_weights" in name: - logger.info("Transposing depthwise") - array = np.transpose(array, (2, 3, 0, 1)) - elif "weights" in name: - logger.info("Transposing") - if len(pointer.shape) == 2: # copying into linear layer - array = array.squeeze().transpose() - else: - array = np.transpose(array, (3, 2, 0, 1)) - - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - - logger.info(f"Initialize PyTorch weight {name} {array.shape}") - pointer.data = torch.from_numpy(array) - - tf_weights.pop(name, None) - tf_weights.pop(name + "/RMSProp", None) - tf_weights.pop(name + "/RMSProp_1", None) - tf_weights.pop(name + "/ExponentialMovingAverage", None) - - logger.info(f"Weights not copied to PyTorch model: {', '.join(tf_weights.keys())}") - return model - - -def apply_tf_padding(features: torch.Tensor, conv_layer: nn.Conv2d) -> torch.Tensor: - """ - Apply TensorFlow-style "SAME" padding to a convolution layer. See the notes at: - https://www.tensorflow.org/api_docs/python/tf/nn#notes_on_padding_2 - """ - in_height, in_width = features.shape[-2:] - stride_height, stride_width = conv_layer.stride - kernel_height, kernel_width = conv_layer.kernel_size - - if in_height % stride_height == 0: - pad_along_height = max(kernel_height - stride_height, 0) - else: - pad_along_height = max(kernel_height - (in_height % stride_height), 0) - - if in_width % stride_width == 0: - pad_along_width = max(kernel_width - stride_width, 0) - else: - pad_along_width = max(kernel_width - (in_width % stride_width), 0) - - pad_left = pad_along_width // 2 - pad_right = pad_along_width - pad_left - pad_top = pad_along_height // 2 - pad_bottom = pad_along_height - pad_top - - padding = (pad_left, pad_right, pad_top, pad_bottom) - return nn.functional.pad(features, padding, "constant", 0.0) - - -class MobileNetV1ConvLayer(nn.Module): - def __init__( - self, - config: MobileNetV1Config, - in_channels: int, - out_channels: int, - kernel_size: int, - stride: Optional[int] = 1, - groups: Optional[int] = 1, - bias: bool = False, - use_normalization: Optional[bool] = True, - use_activation: Optional[bool or str] = True, - ) -> None: - super().__init__() - self.config = config - - if in_channels % groups != 0: - raise ValueError(f"Input channels ({in_channels}) are not divisible by {groups} groups.") - if out_channels % groups != 0: - raise ValueError(f"Output channels ({out_channels}) are not divisible by {groups} groups.") - - padding = 0 if config.tf_padding else int((kernel_size - 1) / 2) - - self.convolution = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - groups=groups, - bias=bias, - padding_mode="zeros", - ) - - if use_normalization: - self.normalization = nn.BatchNorm2d( - num_features=out_channels, - eps=config.layer_norm_eps, - 
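`apply_tf_padding` above reproduces TensorFlow's "SAME" rule: compute the total padding needed so the output size is `ceil(input / stride)`, then split it with the smaller half in front. A worked example with made-up sizes (3x3 kernel, stride 2, 7x7 input):

```python
import torch
import torch.nn.functional as F

in_size, kernel, stride = 7, 3, 2

if in_size % stride == 0:
    pad_along = max(kernel - stride, 0)
else:
    pad_along = max(kernel - (in_size % stride), 0)
pad_before = pad_along // 2
pad_after = pad_along - pad_before
print(pad_before, pad_after)        # 1 1  -> output is ceil(7 / 2) == 4

x = torch.randn(1, 1, in_size, in_size)
x = F.pad(x, (pad_before, pad_after, pad_before, pad_after), "constant", 0.0)
print(x.shape)                      # torch.Size([1, 1, 9, 9])
```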
momentum=0.9997, - affine=True, - track_running_stats=True, - ) - else: - self.normalization = None - - if use_activation: - if isinstance(use_activation, str): - self.activation = ACT2FN[use_activation] - elif isinstance(config.hidden_act, str): - self.activation = ACT2FN[config.hidden_act] - else: - self.activation = config.hidden_act - else: - self.activation = None - - def forward(self, features: torch.Tensor) -> torch.Tensor: - if self.config.tf_padding: - features = apply_tf_padding(features, self.convolution) - features = self.convolution(features) - if self.normalization is not None: - features = self.normalization(features) - if self.activation is not None: - features = self.activation(features) - return features - - -class MobileNetV1PreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = MobileNetV1Config - load_tf_weights = load_tf_weights_in_mobilenet_v1 - base_model_prefix = "mobilenet_v1" - main_input_name = "pixel_values" - supports_gradient_checkpointing = False - - def _init_weights(self, module: Union[nn.Linear, nn.Conv2d]) -> None: - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.BatchNorm2d): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -MOBILENET_V1_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`MobileNetV1Config`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MOBILENET_V1_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`MobileNetV1ImageProcessor.__call__`] for details. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - - -@add_start_docstrings( - "The bare MobileNetV1 model outputting raw hidden-states without any specific head on top.", - MOBILENET_V1_START_DOCSTRING, -) -class MobileNetV1Model(MobileNetV1PreTrainedModel): - def __init__(self, config: MobileNetV1Config, add_pooling_layer: bool = True): - super().__init__(config) - self.config = config - - depth = 32 - out_channels = max(int(depth * config.depth_multiplier), config.min_depth) - - self.conv_stem = MobileNetV1ConvLayer( - config, - in_channels=config.num_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - ) - - strides = [1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 2, 1] - - self.layer = nn.ModuleList() - for i in range(13): - in_channels = out_channels - - if strides[i] == 2 or i == 0: - depth *= 2 - out_channels = max(int(depth * config.depth_multiplier), config.min_depth) - - self.layer.append( - MobileNetV1ConvLayer( - config, - in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - stride=strides[i], - groups=in_channels, - ) - ) - - self.layer.append( - MobileNetV1ConvLayer( - config, - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - ) - ) - - self.pooler = nn.AdaptiveAvgPool2d((1, 1)) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def _prune_heads(self, heads_to_prune): - raise NotImplementedError - - @add_start_docstrings_to_model_forward(MOBILENET_V1_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndNoAttention, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[tuple, BaseModelOutputWithPoolingAndNoAttention]: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - hidden_states = self.conv_stem(pixel_values) - - all_hidden_states = () if output_hidden_states else None - - for i, layer_module in enumerate(self.layer): - hidden_states = layer_module(hidden_states) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - last_hidden_state = hidden_states - - if self.pooler is not None: - pooled_output = torch.flatten(self.pooler(last_hidden_state), start_dim=1) - else: - pooled_output = None - - if not return_dict: - return tuple(v for v in [last_hidden_state, pooled_output, all_hidden_states] if v is not None) - - return BaseModelOutputWithPoolingAndNoAttention( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=all_hidden_states, - ) - - -@add_start_docstrings( - """ - MobileNetV1 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for - ImageNet. 
- """, - MOBILENET_V1_START_DOCSTRING, -) -class MobileNetV1ForImageClassification(MobileNetV1PreTrainedModel): - def __init__(self, config: MobileNetV1Config) -> None: - super().__init__(config) - - self.num_labels = config.num_labels - self.mobilenet_v1 = MobileNetV1Model(config) - - last_hidden_size = self.mobilenet_v1.layer[-1].convolution.out_channels - - # Classifier head - self.dropout = nn.Dropout(config.classifier_dropout_prob, inplace=True) - self.classifier = nn.Linear(last_hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(MOBILENET_V1_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=ImageClassifierOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - labels: Optional[torch.Tensor] = None, - return_dict: Optional[bool] = None, - ) -> Union[tuple, ImageClassifierOutputWithNoAttention]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.mobilenet_v1(pixel_values, output_hidden_states=output_hidden_states, return_dict=return_dict) - - pooled_output = outputs.pooler_output if return_dict else outputs[1] - - logits = self.classifier(self.dropout(pooled_output)) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return ImageClassifierOutputWithNoAttention( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - ) diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/HubertSoft_Onnx.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/HubertSoft_Onnx.py deleted file mode 100644 index 06f10a4ca79c429ed59ab9743578128e8db506cc..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/HubertSoft_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class HubertSoft_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/hubert-soft.onnx",device=None): - 
print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py deleted file mode 100644 index 1989dfcd0855d833a75e403f6a5e88725d78022f..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/solver/build.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import itertools -import logging -from collections import defaultdict -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union -import torch -from fvcore.common.param_scheduler import CosineParamScheduler, MultiStepParamScheduler - -from detectron2.config import CfgNode - -from .lr_scheduler import LRMultiplier, WarmupParamScheduler - -_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]] -_GradientClipper = Callable[[_GradientClipperInput], None] - - -class GradientClipType(Enum): - VALUE = "value" - NORM = "norm" - - -def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper: - """ - Creates gradient clipping closure to clip by value or by norm, - according to the provided config. 
- """ - cfg = copy.deepcopy(cfg) - - def clip_grad_norm(p: _GradientClipperInput): - torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE) - - def clip_grad_value(p: _GradientClipperInput): - torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE) - - _GRADIENT_CLIP_TYPE_TO_CLIPPER = { - GradientClipType.VALUE: clip_grad_value, - GradientClipType.NORM: clip_grad_norm, - } - return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)] - - -def _generate_optimizer_class_with_gradient_clipping( - optimizer: Type[torch.optim.Optimizer], - *, - per_param_clipper: Optional[_GradientClipper] = None, - global_clipper: Optional[_GradientClipper] = None, -) -> Type[torch.optim.Optimizer]: - """ - Dynamically creates a new type that inherits the type of a given instance - and overrides the `step` method to add gradient clipping - """ - assert ( - per_param_clipper is None or global_clipper is None - ), "Not allowed to use both per-parameter clipping and global clipping" - - def optimizer_wgc_step(self, closure=None): - if per_param_clipper is not None: - for group in self.param_groups: - for p in group["params"]: - per_param_clipper(p) - else: - # global clipper for future use with detr - # (https://github.com/facebookresearch/detr/pull/287) - all_params = itertools.chain(*[g["params"] for g in self.param_groups]) - global_clipper(all_params) - super(type(self), self).step(closure) - - OptimizerWithGradientClip = type( - optimizer.__name__ + "WithGradientClip", - (optimizer,), - {"step": optimizer_wgc_step}, - ) - return OptimizerWithGradientClip - - -def maybe_add_gradient_clipping( - cfg: CfgNode, optimizer: Type[torch.optim.Optimizer] -) -> Type[torch.optim.Optimizer]: - """ - If gradient clipping is enabled through config options, wraps the existing - optimizer type to become a new dynamically created class OptimizerWithGradientClip - that inherits the given optimizer and overrides the `step` method to - include gradient clipping. - - Args: - cfg: CfgNode, configuration options - optimizer: type. A subclass of torch.optim.Optimizer - - Return: - type: either the input `optimizer` (if gradient clipping is disabled), or - a subclass of it with gradient clipping included in the `step` method. - """ - if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED: - return optimizer - if isinstance(optimizer, torch.optim.Optimizer): - optimizer_type = type(optimizer) - else: - assert issubclass(optimizer, torch.optim.Optimizer), optimizer - optimizer_type = optimizer - - grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS) - OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping( - optimizer_type, per_param_clipper=grad_clipper - ) - if isinstance(optimizer, torch.optim.Optimizer): - optimizer.__class__ = OptimizerWithGradientClip # a bit hacky, not recommended - return optimizer - else: - return OptimizerWithGradientClip - - -def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - """ - Build an optimizer from config. 
- """ - params = get_default_optimizer_params( - model, - base_lr=cfg.SOLVER.BASE_LR, - weight_decay_norm=cfg.SOLVER.WEIGHT_DECAY_NORM, - bias_lr_factor=cfg.SOLVER.BIAS_LR_FACTOR, - weight_decay_bias=cfg.SOLVER.WEIGHT_DECAY_BIAS, - ) - return maybe_add_gradient_clipping(cfg, torch.optim.SGD)( - params, - lr=cfg.SOLVER.BASE_LR, - momentum=cfg.SOLVER.MOMENTUM, - nesterov=cfg.SOLVER.NESTEROV, - weight_decay=cfg.SOLVER.WEIGHT_DECAY, - ) - - -def get_default_optimizer_params( - model: torch.nn.Module, - base_lr: Optional[float] = None, - weight_decay: Optional[float] = None, - weight_decay_norm: Optional[float] = None, - bias_lr_factor: Optional[float] = 1.0, - weight_decay_bias: Optional[float] = None, - overrides: Optional[Dict[str, Dict[str, float]]] = None, -) -> List[Dict[str, Any]]: - """ - Get default param list for optimizer, with support for a few types of - overrides. If no overrides needed, this is equivalent to `model.parameters()`. - - Args: - base_lr: lr for every group by default. Can be omitted to use the one in optimizer. - weight_decay: weight decay for every group by default. Can be omitted to use the one - in optimizer. - weight_decay_norm: override weight decay for params in normalization layers - bias_lr_factor: multiplier of lr for bias parameters. - weight_decay_bias: override weight decay for bias parameters - overrides: if not `None`, provides values for optimizer hyperparameters - (LR, weight decay) for module parameters with a given name; e.g. - ``{"embedding": {"lr": 0.01, "weight_decay": 0.1}}`` will set the LR and - weight decay values for all module parameters named `embedding`. - - For common detection models, ``weight_decay_norm`` is the only option - needed to be set. ``bias_lr_factor,weight_decay_bias`` are legacy settings - from Detectron1 that are not found useful. - - Example: - :: - torch.optim.SGD(get_default_optimizer_params(model, weight_decay_norm=0), - lr=0.01, weight_decay=1e-4, momentum=0.9) - """ - if overrides is None: - overrides = {} - defaults = {} - if base_lr is not None: - defaults["lr"] = base_lr - if weight_decay is not None: - defaults["weight_decay"] = weight_decay - bias_overrides = {} - if bias_lr_factor is not None and bias_lr_factor != 1.0: - # NOTE: unlike Detectron v1, we now by default make bias hyperparameters - # exactly the same as regular weights. 
- if base_lr is None: - raise ValueError("bias_lr_factor requires base_lr") - bias_overrides["lr"] = base_lr * bias_lr_factor - if weight_decay_bias is not None: - bias_overrides["weight_decay"] = weight_decay_bias - if len(bias_overrides): - if "bias" in overrides: - raise ValueError("Conflicting overrides for 'bias'") - overrides["bias"] = bias_overrides - - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - for module in model.modules(): - for module_param_name, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - - hyperparams = copy.copy(defaults) - if isinstance(module, norm_module_types) and weight_decay_norm is not None: - hyperparams["weight_decay"] = weight_decay_norm - hyperparams.update(overrides.get(module_param_name, {})) - params.append({"params": [value], **hyperparams}) - return reduce_param_groups(params) - - -def _expand_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]: - # Transform parameter groups into per-parameter structure. - # Later items in `params` can overwrite parameters set in previous items. - ret = defaultdict(dict) - for item in params: - assert "params" in item - cur_params = {x: y for x, y in item.items() if x != "params"} - for param in item["params"]: - ret[param].update({"params": [param], **cur_params}) - return list(ret.values()) - - -def reduce_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, Any]]: - # Reorganize the parameter groups and merge duplicated groups. - # The number of parameter groups needs to be as small as possible in order - # to efficiently use the PyTorch multi-tensor optimizer. Therefore instead - # of using a parameter_group per single parameter, we reorganize the - # parameter groups and merge duplicated groups. This approach speeds - # up multi-tensor optimizer significantly. - params = _expand_param_groups(params) - groups = defaultdict(list) # re-group all parameter groups by their hyperparams - for item in params: - cur_params = tuple((x, y) for x, y in item.items() if x != "params") - groups[cur_params].extend(item["params"]) - ret = [] - for param_keys, param_values in groups.items(): - cur = {kv[0]: kv[1] for kv in param_keys} - cur["params"] = param_values - ret.append(cur) - return ret - - -def build_lr_scheduler( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.lr_scheduler._LRScheduler: - """ - Build a LR scheduler from config. - """ - name = cfg.SOLVER.LR_SCHEDULER_NAME - - if name == "WarmupMultiStepLR": - steps = [x for x in cfg.SOLVER.STEPS if x <= cfg.SOLVER.MAX_ITER] - if len(steps) != len(cfg.SOLVER.STEPS): - logger = logging.getLogger(__name__) - logger.warning( - "SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. " - "These values will be ignored." 
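`build_lr_scheduler` (continued below) composes a `MultiStepParamScheduler` with a warmup wrapper, so the effective LR multiplier ramps up linearly and then drops by `gamma` after each milestone. The arithmetic, with made-up numbers and no fvcore dependency, roughly looks like this:

```python
gamma, milestones = 0.1, [600, 800]
warmup_iters, warmup_factor = 100, 0.001

def multiplier(it):
    if it < warmup_iters:                      # linear warmup ramp
        alpha = it / warmup_iters
        return warmup_factor * (1 - alpha) + alpha
    return gamma ** sum(it >= m for m in milestones)   # decay after each milestone

for it in (0, 50, 100, 599, 600, 900):
    print(it, multiplier(it))   # 0.001, ~0.5, 1.0, 1.0, 0.1, 0.01
```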
- ) - sched = MultiStepParamScheduler( - values=[cfg.SOLVER.GAMMA ** k for k in range(len(steps) + 1)], - milestones=steps, - num_updates=cfg.SOLVER.MAX_ITER, - ) - elif name == "WarmupCosineLR": - sched = CosineParamScheduler(1, 0) - else: - raise ValueError("Unknown LR scheduler: {}".format(name)) - - sched = WarmupParamScheduler( - sched, - cfg.SOLVER.WARMUP_FACTOR, - min(cfg.SOLVER.WARMUP_ITERS / cfg.SOLVER.MAX_ITER, 1.0), - cfg.SOLVER.WARMUP_METHOD, - ) - return LRMultiplier(optimizer, multiplier=sched, max_iter=cfg.SOLVER.MAX_ITER) diff --git a/spaces/zej97/AI-Research-Assistant/test/test3.py b/spaces/zej97/AI-Research-Assistant/test/test3.py deleted file mode 100644 index b19587e7b045df07ae61eaec8949524850600394..0000000000000000000000000000000000000000 --- a/spaces/zej97/AI-Research-Assistant/test/test3.py +++ /dev/null @@ -1,21 +0,0 @@ -import openai - -openai.api_key = "sk-DQ1nFYzAVzGMznofdi0nig7MebfA9PWrTxCHlLIZIqc4X8xu" -openai.api_base = "https://api.chatanywhere.cn/v1" - -def generator(): - messages = [{ - "role": "user", - "content": "What is the meaning of life?", - }] - response = "" - for chunk in openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.9, - stream=True, - ): - content = chunk["choices"][0].get("delta", {}).get("content") - if content: - response += content - yield response \ No newline at end of file diff --git a/spaces/zhan66/vits-simple-api/static/css/style.css b/spaces/zhan66/vits-simple-api/static/css/style.css deleted file mode 100644 index 275ec332c1708e619b30a1fb9df2a1fd9ca45799..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/static/css/style.css +++ /dev/null @@ -1,84 +0,0 @@ -.main-container { - position: relative; - width: 100%; - min-height: 300px; -} - -.container { - width: 300px; - position: relative; -} - - -/*tabs*/ -.tabs { - display: flex; - left: 0; -} - -.tab-button { - display: inline-block; - background-color: transparent; - padding: 5px 10px; - cursor: pointer; - margin-bottom: -2px; - border-top: 2px solid transparent; - border-left: 2px solid transparent; - border-right: 2px solid transparent; - border-bottom: 0px; - border-top-left-radius: 0.5rem; - border-top-right-radius: 0.5rem; - color: gray; -} - -.tab-button.active { - background-color: white; - border-top: 2px solid #dee2e6; - border-left: 2px solid #dee2e6; - border-right: 2px solid #dee2e6; - color: black; -} - -/*content*/ - -.content { - border: gray; - border-left-width: 2px; -} - -.content-pane { - display: none; - padding: 20px; -} - -.content-pane.active { - display: flex; - -ms-flex-wrap: wrap; - flex-wrap: wrap; -} - -*, :before, :after { - box-sizing: border-box; - border-width: 0; - border-style: solid; - border-color: #e5e7eb; -} - - -.flex { - display: flex; -} - -.border-transparent { - border-color: transparent; -} - -.border-b-2 { - border-bottom: 2px solid #dee2e6; -} - -.border-lr-2 { - border-left: 2px solid #dee2e6; - border-right: 2px solid #dee2e6; -} - diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/ui/voice/index.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 
400, num = 7, ...others }) { - return ( -
            - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
            - ) - })} -
            - ) -} diff --git a/spaces/zixian/Zhenhuan-VITS/models.py b/spaces/zixian/Zhenhuan-VITS/models.py deleted file mode 100644 index 1b6d10a19f5ec9b09e79e3c4c3e3fcd5db9db33b..0000000000000000000000000000000000000000 --- a/spaces/zixian/Zhenhuan-VITS/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -#import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) 
* noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - 
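Both encoders above (and the `PosteriorEncoder` that follows) build their frame masks with `commons.sequence_mask`, which is not included in this diff. The idea is just a length-based boolean mask; a small illustration:

```python
import torch

lengths = torch.tensor([3, 5])
max_len = int(lengths.max())
mask = (torch.arange(max_len).unsqueeze(0) < lengths.unsqueeze(1)).float()
print(mask)
# tensor([[1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 1.]])
```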
self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, 
padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = 
PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/zomehwh/sovits-rudolf/vdecoder/hifigan/env.py b/spaces/zomehwh/sovits-rudolf/vdecoder/hifigan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-rudolf/vdecoder/hifigan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name))