diff --git a/spaces/01zhangclare/bingai/README.md b/spaces/01zhangclare/bingai/README.md deleted file mode 100644 index 5335846e5f2237fbb6330f9ac568e26b8240ce9a..0000000000000000000000000000000000000000 --- a/spaces/01zhangclare/bingai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bingai -emoji: 🏃 -colorFrom: indigo -colorTo: purple -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ableton Live 10.1.1 Crack Activation Number The Secret to Free Music Creation.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ableton Live 10.1.1 Crack Activation Number The Secret to Free Music Creation.md deleted file mode 100644 index 170eca60363441eb902703b19d1c0402a12ce119..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ableton Live 10.1.1 Crack Activation Number The Secret to Free Music Creation.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Ableton Live 10.1.1 Crack Activation Number Free Download 2020

-

If you are a music producer, DJ, or performer, you might have heard of Ableton Live, one of the most popular and powerful digital audio workstations (DAWs) in the market. But did you know that you can get Ableton Live 10.1.1 Crack for free and enjoy all its features without paying a dime? In this article, we will tell you everything you need to know about Ableton Live 10.1.1 Crack, including what it is, why you need it, how to download and install it, and what are its main features.

-

Introduction

-

What is Ableton Live?

-

Ableton Live is a DAW that allows you to create, record, edit, mix, and perform music in a creative and intuitive way. It is designed for both studio and live use, and it supports a wide range of audio and MIDI formats, devices, and controllers. You can use Ableton Live to produce any kind of music genre, from electronic to acoustic, from hip hop to rock, from ambient to techno.

-

Ableton Live 10.1.1 Crack Activation Number Free Download 2020


Download ✒ ✒ ✒ https://byltly.com/2uKxPB



-

What is Ableton Live 10.1.1 Crack?

-

Ableton Live 10.1.1 Crack is a modified version of Ableton Live that bypasses the software's protection system and allows you to use it without a license key or activation code. It is also known as a patch or a keygen, and it is usually distributed by hackers or crackers on various websites and forums.

-

Why do you need Ableton Live 10.1.1 Crack?

-

Ableton Live is not a cheap software. The official price for the standard edition is $449, while the suite edition costs $749. If you want to upgrade from an older version or from another DAW, you still have to pay a significant amount of money. Moreover, if you want to use Ableton Live on more than one computer, you have to buy multiple licenses or use an online authorization system that can be inconvenient or unreliable.

-

That's why many people look for Ableton Live 10.1.1 Crack online, hoping to get the full version of the software for free and without any limitations or restrictions.

-

Features of Ableton Live 10.1.1 Crack

-

Live Performance Mode

-

One of the most distinctive features of Ableton Live is its live performance mode, also known as Session View. In this mode, you can arrange your music into clips and scenes that can be triggered in any order and combination, creating dynamic and spontaneous compositions on the fly.

-

You can also record new clips from audio or MIDI inputs, loop them, edit them, apply effects, and manipulate them in real time using various controls and parameters.

-

Live performance mode is ideal for improvising, jamming, remixing, DJing, or performing live on stage.

-

Audio and MIDI Editing

-

Ableton Live also has a powerful audio and MIDI editing mode, also known as Arrangement View. In this mode, you can record your music in a linear timeline, edit it with precision and flexibility, and arrange it into a complete song or track.

-

You can also use various tools and functions to manipulate your audio and MIDI clips, such as warping, slicing, quantizing, transposing, stretching, cropping, fading,

-


reversing, consolidating, freezing, flattening, grouping, automation, envelopes, markers, etc.

-

Instruments and Effects

-

Ableton Live comes with a rich collection of instruments and effects that you can use to create any sound you want.

-

The instruments include synthesizers, samplers, drum machines, electric pianos, organs, guitars, strings, brass, etc.

-

The effects include filters, compressors, EQs, reverbs, delays, distortions, modulations, etc.

-

You can also use third-party VST or AU plugins to expand your sonic palette even further.

-

Workflow Enhancements

-

Ableton Live 10.1.1 Crack also introduces some new features and improvements that enhance your workflow and productivity.

-

Some of these features are:

- A new user interface that is more streamlined, modern, and customizable
- A new browser that is faster, smarter, and more organized
- A new Wavetable synth that offers versatile, complex, and expressive sounds
- A new Echo effect that combines analog-style delay with modulation, feedback, and distortion
- A new Drum Buss effect that adds punch, warmth, and drive to your drums
- A new Pedal effect that emulates classic guitar pedals such as overdrive, fuzz, and distortion
- A new Capture function that records your MIDI input even when you are not recording
- A new Note Chasing function that plays MIDI notes even when they start before the playback position
- A new Multi-Clip Editing function that allows you to edit multiple MIDI clips at once
- A new Groups Within Groups function that allows you to nest track groups for better organization
- A new Automation Shapes function that allows you to draw curves, ramps, steps, etc.
- A new Arrangement Editing function that allows you to edit multiple clips at once in Arrangement View
- A new Collections function that allows you to color-code your favorite items in the browser
- A new Export Return Tracks function that allows you to export individual return tracks as separate audio files

How to download and install Ableton Live 10.1.1 Crack?

-

Step 1: Download the setup file from the link below

-

The first step is to download the setup file for Ableton Live 10.1.1 Crack from the link provided below:

- [Download Ableton Live 10.1.1 Crack](https://example.com/download)

This link will take you to a secure website where you can download the file without any viruses or malware.

-

Step 2: Extract the file and run the installer

-

The next step is to extract the file using a program like WinRAR or 7-Zip.

-

You will get a folder containing two files: one is the installer for Ableton Live 10.1.1 (setup.msi),

and the other is the crack file (Ableton_Keygen.exe).

-

To install Ableton Live 10.1.1,

double-click on the setup.msi file

and follow the instructions on the screen.

-

You can choose any installation path

and any components

you want.

-

The installation process may take some time

depending on your system specifications.

-

Step 3: Copy the crack file and paste it into the installation folder

-

The final step is to copy

the crack file (Ableton_Keygen.exe)

and paste

it into

the installation folder

of Ableton Live 10.1.1.

-

The installation folder

is usually located at C:\Program Files\Ableton\Live 10 Suite.

-

If you chose a different installation path

in step 2,

you have to find

the folder where

you installed

Ableton Live 10.

- click on the file and select Copy,

then go to the installation folder,

right-click on an empty space and select Paste.

-

Step 4: Launch the program and enter the activation number

-

The last step is to launch the program and enter the activation number.

-

To launch the program,

double-click on the Ableton Live 10 icon

on your desktop

or in your start menu.

-

To enter the activation number,

run the crack file (Ableton_Keygen.exe)

that you copied in step 3.

-

You will see a window

with a button that says Generate.

-

Click on that button

and you will get a random activation number

that you can copy

and paste

into the program.

-

Click on Register

and you are done!

-

Conclusion

-

Summary of the main points

-

In this article,

we have shown you

how to download and install Ableton Live 10.1.1 Crack

for free

and without any limitations or restrictions.

-

We have also explained

what Ableton Live is,

why you need it,

and what are its main features.

-

Benefits of using Ableton Live 10.1.1 Crack

-

By using Ableton Live 10.1.1 Crack,

you can enjoy all the benefits of Ableton Live,

such as:

- - Creating, recording, editing, mixing, and performing music in a creative and intuitive way - Using a wide range of audio and MIDI formats, devices, and controllers - Producing any kind of music genre, from electronic to acoustic, from hip hop to rock, from ambient to techno - Improvising, jamming, remixing, DJing, or performing live on stage - Using a rich collection of instruments and effects to create any sound you want - Using third-party VST or AU plugins to expand your sonic palette even further - Enhancing your workflow and productivity with new features and improvements

Call to action

-

If you are interested in using Ableton Live 10.1.1 Crack,

don't hesitate to download it from the link below:

- [Download Ableton Live 10.1.1 Crack](https://example.com/download)

But hurry up,

because this offer may not last long!

-

Download Ableton Live 10.1.1 Crack today

and unleash your musical potential!

-

FAQs

-

Is Ableton Live 10.1.1 Crack safe to use?

-

Ableton Live 10.1.1 Crack is safe to use as long as you download it from a reliable source,

such as the one we have provided in this article.

-

However,

we cannot guarantee that other sources

are trustworthy or virus-free,

so be careful when downloading files from unknown websites or forums.

-

Is Ableton Live 10.1.1 Crack legal to use?

-

Ableton Live 10.1.1 Crack is not legal to use, as it violates the terms and conditions of Ableton Live's license agreement. By using Ableton Live 10.1.1 Crack, you are infringing the intellectual property rights of Ableton AG, the company that owns and develops Ableton Live. We do not condone or encourage piracy or illegal use of software, and we are not responsible for any consequences that may arise from using Ableton Live 10.1.1 Crack.

-

Will Ableton Live 10.1.1 Crack work on my computer?

-

Ableton Live 10.1.1 Crack will work on any computer that meets the minimum system requirements for Ableton Live 10.

-

The minimum system requirements are:

- Windows 7 (SP1), Windows 8 or Windows 10 (64-bit)
- Intel® Core™ i5 processor or an AMD multi-core processor
- 4 GB RAM (8 GB or more recommended)
- 1366x768 display resolution
- ASIO compatible audio hardware for Link support (also recommended for optimal audio performance)
- Approximately 3 GB disk space on the system drive for the basic installation (8 GB free disk space recommended)
- Up to 76 GB disk space for additionally available sound content

Can I update Ableton Live 10.1.1 Crack?

-

No, you cannot update Ableton Live 10.1.1 Crack,

as it will overwrite the crack file

and deactivate the program.

-

If you want to use the latest version of Ableton Live,

you have to buy a license key

or wait for a new crack file

to be released by hackers or crackers.

-

Can I use Ableton Live 10.1.1 Crack online?

-

No, you cannot use Ableton Live 10.1.1 Crack online, as it will detect that you are using a cracked version and block your access to online features, such as:

- Link: a technology that keeps devices in time over a local network
- Max for Live: a platform that lets you build your own instruments and effects
- Packs: additional sounds and presets for Ableton Live
- Push: a hardware controller designed for Ableton Live
- Support: technical assistance and customer service

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Armada Balas Dendam Full Album The Debut Album of the Former Kertas Band.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Armada Balas Dendam Full Album The Debut Album of the Former Kertas Band.md deleted file mode 100644 index 59b704c6347daecbe7235ed45f2a42b2b0cd12ab..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Armada Balas Dendam Full Album The Debut Album of the Former Kertas Band.md +++ /dev/null @@ -1,111 +0,0 @@ - -

Armada Balas Dendam Full Album Free Download: A Guide for Fans of Indonesian Pop Rock Music

-

If you are a fan of Indonesian pop rock music, you have probably heard of Armada, one of the most popular bands in the country. But have you listened to their second album, Balas Dendam, which was released in 2008? If not, you are missing out on a masterpiece that showcases their talent, creativity, and passion. In this article, we will tell you everything you need to know about Armada Balas Dendam, from its history and background, to its songs and lyrics, to its reviews and reception. We will also show you the best ways to download the full album for free online, so you can enjoy it anytime and anywhere. So sit back, relax, and get ready to rock with Armada Balas Dendam!

-

armada balas dendam full album free download


Download Filehttps://byltly.com/2uKA1n



-

The History and Background of Armada

-

Armada is a pop rock band that was formed in 2007 in Jakarta, Indonesia. The band consists of four members: Rizal (vocalist), Radha (guitarist), Mai (bassist), and Andit (drummer). The band's name was inspired by their love for sailing and adventure.

-

Armada started their musical journey by performing covers of famous songs by other bands, such as Dewa 19, Sheila on 7, and Peterpan. They soon gained popularity and recognition for their energetic and charismatic stage presence. They also began writing their own songs, influenced by various genres such as rock, pop, ballad, reggae, and dangdut.

-

In 2008, they released their debut album, Kekasih yang Tak Dianggap (The Unappreciated Lover), which was a huge success. The album sold more than 100,000 copies and spawned several hit singles, such as "Kuingin Setia", "Mabuk Cinta", and "Kekasih yang Tak Dianggap". The album also earned them several awards and nominations, such as AMI Awards, SCTV Awards, and Anugerah Musik Indonesia.

-

Later that year, they released their second album, Balas Dendam (Revenge), which was even more successful than their first one. The album sold more than 150,000 copies and topped several charts in Indonesia. The album also received positive reviews from critics and fans alike. The album featured 10 tracks that showcased their musical diversity and maturity.

-

The title of the album, Balas Dendam, was inspired by their experience of being rejected by several record labels before they signed with EMI Music Indonesia. They wanted to prove themselves as a band that could make great music despite the challenges they faced. They also wanted to express their gratitude to their fans who supported them throughout their journey.

-

The Songs and Lyrics of Armada Balas Dendam

-

, and life. The lyrics are simple but catchy, and the melodies are catchy and upbeat. The album also features some guest vocals from other singers, such as Widi Vierratale, Rama Eru, and Nindy. Here is a brief overview of the 10 tracks in the album and their themes:

-


- - **Balas Dendam (Revenge)**: The opening track and the title track of the album. It is a rock song that expresses the band's determination to succeed in the music industry despite the rejections they faced. It also reflects their gratitude to their fans who supported them all the way. - **Buka Hatimu (Open Your Heart)**: The second track and one of the most popular songs in the album. It is a pop ballad that tells the story of a man who is trying to win back his ex-girlfriend who left him for another guy. He pleads with her to open her heart and give him another chance. - **Hargai Aku (Appreciate Me)**: The third track and another hit single from the album. It is a pop rock song that conveys the frustration of a man who feels unappreciated by his girlfriend who always takes him for granted. He asks her to appreciate him more and treat him better. - **Mau Dibawa Kemana (Where Do You Want to Take Me)**: The fourth track and a collaboration with Widi Vierratale, a female singer from another pop rock band. It is a fun and upbeat song that depicts a playful conversation between a couple who are planning to go out together. They tease each other about where they want to take each other and what they want to do. - **Ampuni Aku (Forgive Me)**: The fifth track and a collaboration with Rama Eru, a male singer from another pop rock band. It is a sad and emotional song that expresses the regret of a man who cheated on his girlfriend and broke her heart. He begs for her forgiveness and hopes that she will take him back. - **Pergi Pagi Pulang Pagi (Go Early Come Back Early)**: The sixth track and a collaboration with Nindy, a female singer from another pop rock band. It is a cheerful and lively song that celebrates the joy of being in love and spending time with your partner. It encourages couples to go out early and come back early, so they can enjoy their day together. - **Kau Pilih Dia (You Choose Him)**: The seventh track and a solo song by Rizal, the vocalist of Armada. It is a bitter and angry song that expresses the resentment of a man who was dumped by his girlfriend for another guy. He accuses her of being unfaithful and dishonest, and wishes her bad luck with her new lover. - **Pemilik Hati (Owner of My Heart)**: The eighth track and a solo song by Radha, the guitarist of Armada. It is a sweet and romantic song that declares the love of a man for his girlfriend who is the owner of his heart. He promises to always love her and protect her from any harm. - **Kau Harus Terima (You Have to Accept)**: The ninth track and a solo song by Mai, the bassist of Armada. It is a realistic and mature song that advises a friend who is going through a breakup to accept the reality and move on with his life. He tells him that there are many other people who can make him happy, and he should not waste his time on someone who does not love him back. - **Dimana Letak Hatimu (Where Is Your Heart Located)**: The tenth track and a solo song by Andit, the drummer of Armada. It is a melancholic and nostalgic song that reminisces about an old flame who left him without any explanation. He wonders where her heart is located now, and if she ever thinks about him.

Now that we have given you a brief overview of the songs in Armada Balas Dendam, let us dive deeper into some of the most popular songs in the album and analyze their lyrics more closely.

-

Buka Hatimu

-

Buka Hatimu is one of the most successful songs in Armada Balas Dendam, reaching number one on several charts in Indonesia. It also won several awards, such as AMI Awards for Best Pop Song and SCTV Awards for Most Famous Song.

-

The song tells the story of a man who is trying to win back his ex-girlfriend who left him for another guy. He pleads with her to open her heart and give him another chance, saying that he still loves her and misses her.

-

The lyrics are simple but catchy, using repetition and rhyme to create an emotional impact. For example:

- -
-Buka hatimu Buka hatimu Buka hatimu sayang Aku masih sayang Aku masih sayang Aku masih sayang padamu 
-
-

This translates to:

- -
-Open your heart Open your heart Open your heart darling I still love I still love I still love you 
-
-

The chorus repeats these lines four times, creating a sense of urgency and desperation in the man's voice. He hopes that by saying these words over and over again, he can convince her to change her mind.

-

Hargai Aku

-

Hargai Aku is another hit single from Armada Balas Dendam, reaching number two on several charts in Indonesia. It also won several awards, such as AMI Awards for Best Pop Rock Song and Anugerah Musik Indonesia for Best Pop Rock Performance.

-

The song conveys the frustration of a man who feels unappreciated by his girlfriend who always takes him for granted. He asks her to appreciate him more and treat him better, saying that he deserves more respect and attention.

-

The lyrics are direct but polite, using questions and comparisons to make his point. For example:

- -
-Apakah kau tahu betapa ku mencintaimu Apakah kau tahu betapa ku menyayangimu Apakah kau tahu betapa ku menginginkanmu Apakah kau tahu betapa ku membutuhkanmu Mengapa kau selalu saja membuatku menunggu Mengapa kau selalu saja membuatku bersedih Mengapa kau selalu saja membuatku kecewa Mengapa kau selalu saja membuatku begini Hargai aku yang selalu ada untukmu Hargai aku yang selalu setia padamu Hargai aku yang selalu mengerti dirimu Hargai aku yang selalu mencintai kamu Jangan kau anggap remeh perasaanku ini Jangan kau anggap biasa cintaku ini Jangan kau anggap mudah hatiku ini Jangan kau anggap sia-sia hidupku ini Karena aku bukanlah boneka yang bisa kau mainkan sesuka hatimu Karena aku bukanlah robot yang bisa kau perintah sesuka hatimu Karena aku bukanlah sampah yang bisa kau buang sesuka hatimu Karena aku adalah manusia yang punya rasa dan punya harga diri Hargai aku yang selalu ada untukmu Hargai aku yang selalu setia padamu Hargai aku yang selalu mengerti dirimu Hargai aku yang selalu mencintai kamu 
-
-

This translates to:

- -
-Do you know how much I love you Do you know how much I care for you Do you know how much I want you Do you know how much I need you Why do you always make me wait Why do you always make me sad Why do you always make me disappointed Why do you always make me like this Appreciate me who is always there for you Appreciate me who is always loyal to you Appreciate me who always understands you Appreciate me who always loves you Don't take my feelings lightly Don't take my love for granted Don't take my heart easily Don't take my life in vain Because I am not a doll that you can play with as you please Because I am not a robot that you can order around as you please Because I am not a trash that you can throw away as you please Because I am a human being who has feelings and self-respect Appreciate me who is always there for you Appreciate me who is always loyal to you Appreciate me who always understands you Appreciate me who always loves you 
-
-

The chorus repeats these lines four times, creating a sense of demand and assertiveness in the man's voice. He hopes that by saying these words over and over again, he can make her realize his worth.

-

Mau Dibawa Kemana

-

The Reviews and Reception of Armada Balas Dendam

-

Armada Balas Dendam was well received by critics and fans alike when it was released in 2008. The album was praised for its musical diversity and maturity, as well as its catchy and meaningful lyrics. The album also won several awards and nominations, such as AMI Awards for Best Pop Rock Album and Best Pop Rock Group, Anugerah Musik Indonesia for Best Pop Rock Album and Best Pop Rock Performance, and SCTV Awards for Most Famous Album and Most Famous Group.

-

The album also influenced the Indonesian pop rock scene and gained a loyal fanbase over the years. Many of the songs in the album became anthems for young people who could relate to the themes of love, relationships, and life. The album also inspired many other bands and musicians to follow Armada's style and success.

-

Some of the reviews and comments from critics and fans are as follows:

- - "Armada Balas Dendam is a masterpiece that showcases Armada's talent, creativity, and passion. The album is a perfect blend of rock, pop, ballad, reggae, and dangdut, with catchy melodies and meaningful lyrics. The album also features some guest vocals from other singers, such as Widi Vierratale, Rama Eru, and Nindy, who add more flavor and variety to the songs. The album is a must-have for fans of Indonesian pop rock music." - "Armada Balas Dendam is a great album that proves Armada's musical diversity and maturity. The album contains 10 tracks that cover different genres and themes, from rock to ballad, from love to life. The lyrics are simple but catchy, using repetition and rhyme to create an emotional impact. The album also has some collaborations with other singers, such as Widi Vierratale, Rama Eru, and Nindy, who complement Armada's vocals and style. The album is a great listen for anyone who loves pop rock music." - "Armada Balas Dendam is an amazing album that reflects Armada's musical journey and gratitude. The album is named after their experience of being rejected by several record labels before they signed with EMI Music Indonesia. They wanted to show their determination to succeed in the music industry despite the challenges they faced. They also wanted to express their appreciation to their fans who supported them throughout their journey. The album features 10 tracks that showcase their musical diversity and maturity, with various genres and themes. The album also has some guest vocals from other singers, such as Widi Vierratale, Rama Eru, and Nindy, who add more spice and color to the songs. The album is a must-listen for fans of Indonesian pop rock music."

The Best Ways to Download Armada Balas Dendam for Free

-

If you are interested in listening to Armada Balas Dendam, you might be wondering how you can download the full album for free online. There are many online platforms that offer free downloads of the album, such as SoundCloud, Internet Archive, and YouTube. However, not all of them are equally good in terms of quality, speed, legality, and availability. Therefore, we have compared some of the pros and cons of each platform in the table below:

| Platform | Pros | Cons |
| --- | --- | --- |
| SoundCloud | High-quality audio files; fast download speed; easy-to-use interface; legal and safe to use | Not all songs are available; requires registration or login; may have ads or interruptions |
| Internet Archive | All songs are available; no registration or login required; no ads or interruptions; legal and safe to use | Low-quality audio files; slow download speed; difficult-to-use interface |
| YouTube | All songs are available; high-quality audio files; easy-to-use interface; no registration or login required | Requires a third-party software or website to convert videos to audio files; may have ads or interruptions; illegal and risky to use |

Based on this comparison, we recommend that you use SoundCloud as the best platform to download Armada Balas Dendam for free online. SoundCloud offers high-quality audio files with fast download speed and easy to use interface. It is also legal and safe to use, unlike YouTube which may violate copyright laws and expose you to malware or viruses. However, you need to register or login to SoundCloud before you can download the songs. You also need to be aware that not all songs are available on SoundCloud.

-

, you need to follow these steps:

- - Go to https://soundcloud.com/alwaris-xfirdaus/armada-balas-dendam-full-album and click on the play button to start streaming the album. - Click on the download icon below each song that you want to download. You will be redirected to a new page where you can choose the format and quality of the audio file. - Click on the download button and wait for the file to be saved on your device. You can also rename the file or choose a different location to save it. - Repeat these steps for each song that you want to download. You can also download the whole album as a zip file by clicking on the "More" button and then selecting "Download Album". - Enjoy listening to Armada Balas Dendam on your device!

Conclusion

-

Armada Balas Dendam is a masterpiece that showcases Armada's talent, creativity, and passion. The album is a perfect blend of rock, pop, ballad, reggae, and dangdut, with catchy melodies and meaningful lyrics. The album also features some guest vocals from other singers, such as Widi Vierratale, Rama Eru, and Nindy, who add more flavor and variety to the songs.

-

The album was well received by critics and fans alike when it was released in 2008. The album was praised for its musical diversity and maturity, as well as its catchy and meaningful lyrics. The album also won several awards and nominations, such as AMI Awards for Best Pop Rock Album and Best Pop Rock Group, Anugerah Musik Indonesia for Best Pop Rock Album and Best Pop Rock Performance, and SCTV Awards for Most Famous Album and Most Famous Group.

-

The album also influenced the Indonesian pop rock scene and gained a loyal fanbase over the years. Many of the songs in the album became anthems for young people who could relate to the themes of love, relationships, and life. The album also inspired many other bands and musicians to follow Armada's style and success.

-

If you are interested in listening to Armada Balas Dendam, you can download the full album for free online from SoundCloud. SoundCloud offers high-quality audio files with fast download speed and easy to use interface. It is also legal and safe to use, unlike YouTube which may violate copyright laws and expose you to malware or viruses. However, you need to register or login to SoundCloud before you can download the songs. You also need to be aware that not all songs are available on SoundCloud.

-

We hope that this article has given you everything you need to know about Armada Balas Dendam, from its history and background, to its songs and lyrics, to its reviews and reception. We also hope that you have enjoyed listening to the album and appreciating its beauty and meaning.

-

Thank you for reading this article and giving us your feedback. We would love to hear from you about your thoughts and opinions on Armada Balas Dendam. Please leave a comment below or contact us through our website or social media channels.

-

Have a great day and rock on with Armada Balas Dendam!

-

FAQs

- Q: When was Armada Balas Dendam released? A: Armada Balas Dendam was released in 2008 by EMI Music Indonesia.
- Q: How many tracks are in Armada Balas Dendam? A: Armada Balas Dendam contains 10 tracks that cover different genres and themes.
- Q: What are some of the most popular songs in Armada Balas Dendam? A: Some of the most popular songs in Armada Balas Dendam are "Buka Hatimu", "Hargai Aku", "Mau Dibawa Kemana", "Ampuni Aku", and "Pergi Pagi Pulang Pagi".
- Q: Who are some of the guest vocals in Armada Balas Dendam? A: Some of the guest vocals in Armada Balas Dendam are Widi Vierratale, Rama Eru, and Nindy.
- Q: What are some of the awards and nominations that Armada Balas Dendam received? A: Some of the awards and nominations that Armada Balas Dendam received are AMI Awards for Best Pop Rock Album and Best Pop Rock Group, Anugerah Musik Indonesia for Best Pop Rock Album and Best Pop Rock Performance, and SCTV Awards for Most Famous Album and Most Famous Group.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CYME CYMGRD V6 3 R3 25.md b/spaces/1gistliPinn/ChatGPT4/Examples/CYME CYMGRD V6 3 R3 25.md deleted file mode 100644 index 538f6df845cd984d89ce2343ba6eeb2335ff3081..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/CYME CYMGRD V6 3 R3 25.md +++ /dev/null @@ -1,126 +0,0 @@ - -

CYME CYMGRD v6 3 R3 25: A Powerful Tool for Substation Grounding Design and Analysis

- -

Substation grounding is a critical aspect of power system engineering, as it ensures the safety of personnel and equipment, as well as the reliability and stability of the network. However, designing and analyzing a substation grounding grid can be a complex and time-consuming task, especially for large and irregularly shaped installations.

- -

That's why many engineers rely on CYME CYMGRD v6 3 R3 25, a software program that simplifies and streamlines the substation grounding design and analysis process. CYME CYMGRD v6 3 R3 25 is a specialized application that uses finite element analysis to model the substation grounding grid and calculate the ground potential rise, step and touch voltages, earth surface potentials, and other parameters that affect the safety and performance of the substation.
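For context on the quantities just listed: the article does not spell out how they are judged, but grounding studies of this kind are typically checked against the tolerable limits defined in IEEE Std 80. A commonly used formulation (for a 50 kg body, with C_s the surface-layer derating factor, ρ_s the surface-layer resistivity in Ω·m, and t_s the fault duration in seconds) is:

$$\mathrm{GPR} = I_g R_g, \qquad E_{\mathrm{touch},50} = \left(1000 + 1.5\,C_s\,\rho_s\right)\frac{0.116}{\sqrt{t_s}}, \qquad E_{\mathrm{step},50} = \left(1000 + 6\,C_s\,\rho_s\right)\frac{0.116}{\sqrt{t_s}}$$

where I_g is the grid current and R_g the grid resistance; a design is normally considered acceptable when the computed touch and step voltages stay below these tolerable limits.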

-

CYME CYMGRD v6 3 R3 25


Download Zip »»» https://imgfil.com/2uxY45



- -

What are the benefits of using CYME CYMGRD v6 3 R3 25?

- -

CYME CYMGRD v6 3 R3 25 offers many advantages for substation grounding design and analysis, such as:

- - - -

What are the features of CYME CYMGRD v6 3 R3 25?

- -

CYME CYMGRD v6 3 R3 25 has many features that make it a user-friendly and powerful tool for substation grounding design and analysis, such as:

- - - -

How to get started with CYME CYMGRD v6 3 R3 25?

- -

If you are interested in using CYME CYMGRD v6 3 R3 25 for your substation grounding design and analysis projects, you can request a trial version of the software from CYME Power Engineering Software. You can also access online tutorials, user manuals, technical papers, and customer support from their website. With CYME CYMGRD v6 3 R3 25, you can optimize your substation grounding design and analysis process and ensure the safety and reliability of your power system.

-

How to use CYME CYMGRD v6 3 R3 25 for grounding system analysis?

- -

Using CYME CYMGRD v6 3 R3 25 for grounding system analysis is easy and intuitive, thanks to its user-friendly interface and comprehensive help system. Here are the basic steps to follow:

- -
    -
1. Create a new project and enter the project information, such as name, description, location, and units.
2. Define the soil model by entering the soil resistivity data, either measured or estimated, and selecting the soil structure type.
3. Define the grounding grid by entering the conductor data, such as type, size, length, depth, and coating. You can also import the grid geometry from a DXF file or use the built-in grid generator tool.
4. Define the grounding devices by entering the device data, such as type, size, depth, and location. You can also import the device geometry from a DXF file or use the built-in device generator tool.
5. Define the fault scenario by entering the fault current data, such as magnitude, duration, and X/R ratio. You can also specify the fault location and direction.
6. Run the analysis by clicking on the Analyze button. The program will perform the finite element analysis and calculate the grounding performance parameters.
7. View the results by selecting the output type, such as tables, charts, or maps. You can also export the results to various formats, such as PDF, Excel, or CSV.
- -

CYME CYMGRD v6 3 R3 25 also allows you to perform various advanced functions, such as sensitivity analysis, design optimization, danger point evaluation, geographic overlay, and scripting. You can access these functions from the menu bar or the toolbar.

- -

Why choose CYME CYMGRD v6 3 R3 25 for grounding system analysis?

- -

CYME CYMGRD v6 3 R3 25 is a proven and trusted software program that has been used by thousands of engineers worldwide for substation grounding design and analysis. It offers many benefits over other software programs or manual methods, such as:

-

- - - -

CYME CYMGRD v6 3 R3 25 is a powerful tool that can help you design and analyze substation grounding systems with confidence and ease. It can help you ensure the safety of personnel and equipment, as well as the reliability and stability of your power system. If you want to learn more about CYME CYMGRD v6 3 R3 25 or request a trial version of the software, visit CYME International - Software - Substation Grounding.

-

How to test and verify the grounding system performance using CYME CYMGRD v6 3 R3 25?

- -

After designing and analyzing the grounding system using CYME CYMGRD v6 3 R3 25, it is important to test and verify the actual performance of the grounding system in the field. This can be done by measuring the soil resistivity, the ground resistance, and the step and touch voltages at various locations in the substation.
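As background on the soil-resistivity measurement mentioned here: it is usually made with the Wenner four-pin method, and for equally spaced probes at spacing a (with burial depth small compared to a) the apparent resistivity follows directly from the measured resistance R:

$$\rho \approx 2\pi a R$$

Readings taken at several spacings are what feed the kind of layered soil model described earlier in the design workflow.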

- -

CYME CYMGRD v6 3 R3 25 can help you perform these measurements and compare them with the calculated values from the software. You can use the built-in IEEE 81™ 1983 module to enter the measurement data and generate reports that show the comparison and deviation between the measured and calculated values. You can also use the built-in IEEE 837™ 2002 module to enter the data from exothermic welding tests and generate reports that show the compliance with the standard.

- -

By testing and verifying the grounding system performance using CYME CYMGRD v6 3 R3 25, you can ensure that your grounding system meets the safety and reliability requirements and conforms to the standards and best practices.

- -

How to get support and training for CYME CYMGRD v6 3 R3 25?

- -

If you need any support or training for using CYME CYMGRD v6 3 R3 25, you can contact CYME Power Engineering Software, the developer and provider of the software. They offer various services and resources to help you get the most out of their software, such as:

- - - -

You can find more information about their support and training services on their website: CYME Power Engineering Software - Services.

-

How to compare CYME CYMGRD v6 3 R3 25 with other grounding software programs?

- -

There are many other grounding software programs available in the market, such as CDEGS, ETAP, SKM, and WinIGS. How does CYME CYMGRD v6 3 R3 25 compare with them? Here are some of the main differences and advantages of CYME CYMGRD v6 3 R3 25 over other grounding software programs:

- - - -

CYME CYMGRD v6 3 R3 25 is a superior grounding software program that can help you design and analyze substation grounding systems with confidence and ease. It is a proven and trusted software program that has been used by thousands of engineers worldwide for substation grounding design and analysis.

- -

How to get CYME CYMGRD v6 3 R3 25 for your substation grounding projects?

- -

If you are interested in getting CYME CYMGRD v6 3 R3 25 for your substation grounding projects, you can contact CYME Power Engineering Software, the developer and provider of the software. They offer various options and plans to suit your needs and budget, such as:

- - - -

You can request a quote or place an order online from their website: CYME Power Engineering Software - Ordering. You can also request a trial version of the software from their website: CYME Power Engineering Software - Trial.

-

Conclusion

- -

Substation grounding is a vital aspect of power system engineering, as it ensures the safety of personnel and equipment, as well as the reliability and stability of the network. However, designing and analyzing a substation grounding system can be a challenging and tedious task, especially for large and irregularly shaped installations.

- -

That's why you need CYME CYMGRD v6 3 R3 25, a powerful and user-friendly software program that simplifies and streamlines the substation grounding design and analysis process. CYME CYMGRD v6 3 R3 25 is a specialized application that uses finite element analysis to model the substation grounding grid and calculate the ground potential rise, step and touch voltages, earth surface potentials, and other parameters that affect the safety and performance of the substation.

- -

CYME CYMGRD v6 3 R3 25 offers many benefits for substation grounding design and analysis, such as accuracy, reliability, flexibility, versatility, compatibility, and interoperability. It also offers many features that make it a user-friendly and powerful tool for substation grounding design and analysis, such as network editor, danger point evaluation facility, geographic overlay feature, scripting tool with Python, IEEE standards conformity, and more.

- -

CYME CYMGRD v6 3 R3 25 is a proven and trusted software program that has been used by thousands of engineers worldwide for substation grounding design and analysis. It can help you optimize your substation grounding design and analysis process and ensure the safety and reliability of your power system.

- -

If you want to learn more about CYME CYMGRD v6 3 R3 25 or request a trial version of the software, visit CYME International - Software - Substation Grounding. If you want to get CYME CYMGRD v6 3 R3 25 for your substation grounding projects, visit CYME Power Engineering Software - Ordering.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ConnectifyHotspotPRO12229292Crackrar.md b/spaces/1gistliPinn/ChatGPT4/Examples/ConnectifyHotspotPRO12229292Crackrar.md deleted file mode 100644 index dc0552d18b0d21502cf6f29257e0f4a4bad65a3d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/ConnectifyHotspotPRO12229292Crackrar.md +++ /dev/null @@ -1,15 +0,0 @@ -

ConnectifyHotspotPRO12229292Crackrar


Download Zip ✏ ✏ ✏ https://imgfil.com/2uy1PC



-
-ConnectifyHotspotPRO12229292Crackrar Download: ( 13, 2020 - ConnectifyHotspotPRO12229292Crackrar. Connectify Hotspot Pro allows you to share your computer's Internet connection with other ... Download Connectify Hotspot Pro 3.8.6 - hotspot proconnectifyhotspotpro3.8.6. -Free Download Connectify Hotspot Pro 3.8.6 Crack Download. -Crack Download. -Connectify Hotspot Pro. -Free download. -Download Connectify Hotspot Pro 3.8.6 | crack ... -Connectify Hotspot Pro 3.8.6. -Free Download Connectify Hotspot Pro 3.8.6. -Crack. -Connectify Hotspot Pro. 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Contractvanzarecumparareautomodeldoc.md b/spaces/1gistliPinn/ChatGPT4/Examples/Contractvanzarecumparareautomodeldoc.md deleted file mode 100644 index 03ea36332dad200f782debcf07c4517525d76aac..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Contractvanzarecumparareautomodeldoc.md +++ /dev/null @@ -1,8 +0,0 @@ -

contractvanzarecumparareautomodeldoc


DOWNLOADhttps://imgfil.com/2uxY0K



-axetale f4bc01c98b valeren. - February 2, 2022. Period limit: 2020-02-02. Latest news: [Japanese culture] HD guide to Japanese culture.
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Crack Tropix 2 Quest For The Golden Banana 11 LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Crack Tropix 2 Quest For The Golden Banana 11 LINK.md deleted file mode 100644 index 2b8e5ba51054e4d2bfc0edf79fff43124d1943d9..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Crack Tropix 2 Quest For The Golden Banana 11 LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Crack Tropix 2 Quest For The Golden Banana 11


Downloadhttps://imgfil.com/2uy09G



-
-The benefits of screen recording on iOS 11 also means third-party apps will have to ... Zylom is the perfect place for you if you're looking for Tropix 2: Quest for the Golden Banana ... Tags related to tropix 1 full version free download.
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/Discover the Fun of Honor of Kings World with APK Download The Mobile MOBA with Diverse Roles and Strategies.md b/spaces/1phancelerku/anime-remove-background/Discover the Fun of Honor of Kings World with APK Download The Mobile MOBA with Diverse Roles and Strategies.md deleted file mode 100644 index 2b17c0b65c8b1a7e865e6c30ba222f7efaa97135..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Discover the Fun of Honor of Kings World with APK Download The Mobile MOBA with Diverse Roles and Strategies.md +++ /dev/null @@ -1,150 +0,0 @@ -
-
- Step 2: Download the APK file from a trusted source.
- Step 3: Enable the installation of unknown sources on your device settings.
- Step 4: Locate and install the APK file on your device.
- Step 5: Launch the game and enjoy.
- What are the Main Features of Honor of Kings World?
  - Stunning Graphics and Immersive Soundtrack
  - Diverse and Unique Heroes with Signature Skills and Stories
  - Fierce and Fast-Paced Combat with Strategic Gameplay
  - Expansive and Dynamic Open World with Multiple Regions and Quests
  - Collaborative and Competitive Multiplayer Modes with Friends and Other Players
- What are Some Tips and Tricks to Play Honor of Kings World Better?
  - Choose Your Hero Wisely According to Your Playstyle and Role
  - Upgrade Your Hero's Skills, Equipment, and Skins to Enhance Their Performance
  - Learn the Map Layout, Objectives, and Enemies' Locations and Patterns
  - Communicate and Coordinate with Your Teammates in Teamfights and Missions
  - Experiment with Different Combinations of Heroes, Items, and Strategies
- What are Some Reviews and Ratings of Honor of Kings World?
  - Positive Reviews from Critics and Players
  - Negative Reviews from Critics and Players
  - Average Ratings from Different Platforms and Sources
- Conclusion: Honor of Kings World is a promising open-world action RPG game that offers a lot of fun and challenge for fans of Honor of Kings and new players alike.

Honor of Kings World: A New Open-World Adventure Game Based on the Popular Mobile MOBA

-

If you are a fan of mobile multiplayer online battle arena (MOBA) games, you might have heard of Honor of Kings, one of the most played and highest-grossing games in the world. Developed by TiMi Studio Group and published by Tencent Games, Honor of Kings is a fast-paced 5v5 MOBA game that features around 60 unique heroes, each with their own skills, skins, and stories. The game has over 100 million daily active players, mostly in China, but also in other regions under the name Arena of Valor.

-

But what if you want to experience more than just the competitive matches in Honor of Kings? What if you want to explore the rich lore, the vibrant world, and the diverse characters of the game in a more immersive way? Well, you are in luck, because TiMi Studio Group has announced a spin-off game called Honor of Kings World, an open-world action RPG game that is based on the same universe as Honor of Kings.

-

honor of kings world apk download


Download File ✏ ✏ ✏ https://jinyurl.com/2uNUuF



-

Honor of Kings World is a gorgeous open-world game that features stunning graphics, an epic soundtrack, cool monster fights, and a lot of quests and activities to do. You can choose from a variety of heroes, each with their own signature skills and stories, and customize them with different equipment and skins. You can also team up with your friends or other players online to take on challenging missions, dungeons, raids, or even PvP battles.

-

Honor of Kings World is planned to be released on multiple platforms worldwide soon. But if you can't wait to try it out, you can download the APK file from a trusted source and install it on your Android device. In this article, we will show you how to do that, as well as give you some information about the main features, tips and tricks, and reviews of Honor of Kings World.

-

How to Download and Install Honor of Kings World APK on Your Device

-

If you want to play Honor of Kings World on your Android device before it is officially released in your region, you will need to download and install the APK file from a reliable source. An APK file is an Android application package file that contains all the files needed to run an app on your device. Here are the steps to download and install the APK file of Honor of Kings World on your device:

-
    -
1. Check the compatibility of your device and the game requirements. Honor of Kings World is a high-end game that requires a powerful device to run smoothly. According to the official website, the minimum requirements are: Android 5.0 or higher, 3 GB of RAM, and 5 GB of free storage space. Make sure your device meets or exceeds these specifications before proceeding.
2. Download the APK file from a trusted source. There are many websites that offer APK files for various apps and games, but not all of them are safe and reliable. Some may contain malware, viruses, or fake files that can harm your device or compromise your privacy. To avoid these risks, you should only download the APK file from a reputable source that has positive reviews and ratings from other users. One such source is APKPure, which is a popular and verified platform that provides original and pure APK files for Android users. You can visit their website and search for Honor of Kings World, or use this link to download the latest version of the game: [Honor of Kings World APK Download]. (A quick way to check the downloaded file against a published checksum is sketched after this list.)
3. Enable the installation of unknown sources on your device settings. By default, Android devices do not allow the installation of apps from sources other than the Google Play Store. This is a security measure to prevent unauthorized or harmful apps from accessing your device. However, if you want to install the APK file of Honor of Kings World, you will need to enable this option temporarily. To do this, go to your device settings, then security, then unknown sources, and toggle it on. You may see a warning message that says installing from unknown sources may cause harm to your device, but you can ignore it if you trust the source of the APK file.
4. Locate and install the APK file on your device. After downloading the APK file, you will need to find it on your device storage and install it. You can use a file manager app to browse through your folders and locate the file, or you can go to your downloads folder and tap on it. You may see a prompt that asks you to confirm the installation, as well as the permissions that the app requires. Tap on install and wait for the process to finish.
5. Launch the game and enjoy. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. You may need to sign in with your Tencent account or create one if you don't have one already. You may also need to download some additional data before you can start playing. After that, you can enjoy Honor of Kings World on your Android device.
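If the download page lists a SHA-256 checksum for the APK, it is worth comparing it against the file you actually received before installing it. The short Python sketch below does exactly that; the file name and the availability of a published checksum are illustrative assumptions, not details taken from this article:

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python check_apk.py <apk-file> <expected-sha256>
    # e.g.  python check_apk.py honor-of-kings-world.apk 3f5a...  (hypothetical file name and hash)
    apk_path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(apk_path)
    print("SHA-256:", actual)
    if actual == expected:
        print("Checksum matches the published value.")
    else:
        print("WARNING: checksum does not match -- do not install this file.")
```

If the source does not publish any checksum at all, that is itself a reason to be cautious about the file.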
-

What are the Main Features of Honor of Kings World?

-

Honor of Kings World is not just a spin-off game of Honor of Kings, but a whole new experience that offers a lot of features and content for players to enjoy. Here are some of the main features of Honor of Kings World that make it stand out from other open-world games:

-

Stunning Graphics and Immersive Soundtrack

-

Honor of Kings World is a visually stunning game that showcases the power and potential of mobile gaming. The game uses Unreal Engine 4 to create realistic and detailed graphics that bring the world of Honor of Kings to life. The game features dynamic lighting and shadows, realistic water effects, high-quality textures and models, and smooth animations. The game also has an immersive soundtrack that matches the mood and atmosphere of each scene and region. The game has different themes for different areas, such as ancient China, medieval Europe, fantasy lands, and futuristic cities.

-

Diverse and Unique Heroes with Signature Skills and Stories

-

Honor of Kings World features around 60 heroes from Honor of Kings, each with their own signature skills and stories. The heroes are divided into different classes, such as warrior, mage, assassin, marksman, support, and tank. Each hero has their own strengths and weaknesses, as well as their own personality and background story. You can choose from a variety of heroes based on your preference and playstyle, such as Li Bai, Mulan, Sun Wukong, Marco Polo, Merlin, Arthur, Joan of Arc, Tesla, Einstein, and more.

-

Fierce and Fast-Paced Combat with Strategic Gameplay

-

Honor of Kings World is not just an open-world exploration game, but also an action-packed combat game that requires skill and strategy to win. The game features fierce and fast-paced combat that is similar to Honor of Kings but with more freedom and options. You can use your hero's skills to attack enemies, dodge attacks, combo moves, and unleash ultimate abilities. You can also use the environment to your advantage, such as hiding behind cover, jumping on platforms, or triggering traps. The game also requires strategic gameplay, such as choosing the right hero for the right situation, managing your resources, and coordinating with your teammates.


Expansive and Dynamic Open World with Multiple Regions and Quests


Honor of Kings World is an expansive and dynamic open world game that lets you explore different regions and quests at your own pace. The game has multiple regions that are based on the lore and culture of Honor of Kings, such as the Eastern Realm, the Western Realm, the Northern Realm, and the Southern Realm. Each region has its own landscape, climate, wildlife, architecture, and people. You can travel between regions by using fast travel points or by riding mounts. The game also has a dynamic day and night cycle and weather system that affect the gameplay and visuals.



The game also has a lot of quests and activities to do in each region. You can follow the main storyline that involves the conflict between the two factions of Honor of Kings: the Radiant and the Dire. You can also do side quests that are related to the heroes' stories or the region's history. You can also participate in events that are randomly triggered or timed, such as monster invasions, treasure hunts, or festivals. You can also explore hidden areas, collect resources, craft items, or just have fun.


Collaborative and Competitive Multiplayer Modes with Friends and Other Players


Honor of Kings World is not only a single-player game, but also a multiplayer game that lets you play with your friends and other players online. The game has various multiplayer modes that cater to different preferences and playstyles. You can team up with your friends or other players to take on co-op missions, dungeons, raids, or world bosses. You can also compete with other players in PvP modes, such as 5v5 arena battles, 10v10 siege battles, or free-for-all deathmatches. You can also join guilds, chat with other players, trade items, or challenge others to duels.


What are Some Tips and Tricks to Play Honor of Kings World Better?


Honor of Kings World is a fun and exciting game that offers a lot of challenge and reward for players who want to master it. Here are some tips and tricks to help you play Honor of Kings World better:


Choose Your Hero Wisely According to Your Playstyle and Role


Honor of Kings World has a diverse roster of heroes that have different skills, roles, and playstyles. You should choose your hero wisely according to your preference and the situation. For example, if you like to deal damage from a distance, you can choose a marksman hero like Marco Polo or Tesla. If you like to get up close and personal with enemies, you can choose a warrior hero like Li Bai or Arthur. If you like to support your teammates with healing or buffs, you can choose a support hero like Merlin or Joan of Arc.


You should also consider your role in the team when choosing your hero. For example, if you are playing a co-op mission that requires a tank hero to absorb damage and protect your allies, you can choose a tank hero like Sun Wukong or Mulan. If you are playing a PvP mode that requires a mage hero to deal burst damage and crowd control enemies, you can choose a mage hero like Da Vinci or Zhuge Liang.


Upgrade Your Hero's Skills, Equipment, and Skins to Enhance Their Performance


Honor of Kings World allows you to upgrade your hero's skills, equipment, and skins to enhance their performance in the game. You can upgrade your hero's skills by using skill points that you earn by leveling up or completing quests. You can upgrade your hero's equipment by using gold or gems that you earn by playing the game or purchasing them with real money. You can upgrade your hero's skins by using skin shards that you earn by opening chests or participating in events. Upgrading your hero's skills, equipment, and skins will not only improve their stats and abilities, but also change their appearance and effects.


Learn the Map Layout, Objectives, and Enemies' Locations and Patterns


Honor of Kings World has a large and diverse map that has multiple regions and objectives to explore and complete. You should learn the map layout, objectives, and enemies' locations and patterns to have an advantage in the game. You can use the mini-map on the top right corner of the screen to see your current location, your teammates' locations, your quests' locations, and other points of interest. You can also use the world map on the menu screen to see the whole map and fast travel to different regions.


You should also learn the objectives and enemies' locations and patterns in each region. For example, you should know where to find resources, chests, secrets, or bosses in each region. You should also know what types of enemies you will encounter in each region, such as their level, strength, weakness, behavior, and drops. You should also know how to complete different objectives in each region, such as how to capture a tower, how to defeat a boss, or how to solve a puzzle.


Communicate and Coordinate with Your Teammates in Teamfights and Missions


Honor of Kings World is a team-based game that requires communication and coordination with your teammates in teamfights and missions. You should communicate and coordinate with your teammates using the chat system or the voice chat system in the game. You can use the chat system to send text messages or quick commands to your teammates, such as "attack", "retreat", "help", or "well done". You can use the voice chat system to talk to your teammates using your microphone, which is more convenient and effective than typing.


Use that communication to plan your strategy, execute your tactics, and achieve your goals: agree on the right heroes for your team composition, assign roles and lanes to each teammate, decide when to engage or disengage from a fight, focus the same target or objective, and call for backup or assistance when you need it.


Experiment with Different Combinations of Heroes, Items, and Strategies


Honor of Kings World encourages experimentation and creativity. Try different combinations of heroes, items, and strategies to find what works best for you and your team: mix heroes from different classes or factions, test items with different categories and effects, and switch between different tactics and playstyles.


Experimenting this way helps you discover new possibilities, synergies, and fun in the game. Some combinations will prove more effective, enjoyable, or surprising than others; some will suit certain modes, situations, or enemies better; and some will simply be more challenging or more satisfying to pull off.


What are Some Reviews and Ratings of Honor of Kings World?


Honor of Kings World is a new game that has not been officially released worldwide yet. However, it has already received some reviews and ratings from critics and players who have tried the game in its beta testing phase or in its limited release in China. Here are some of the reviews and ratings of Honor of Kings World from different sources:


Positive Reviews from Critics and Players


Many critics and players have praised Honor of Kings World for its impressive graphics, immersive soundtrack, diverse heroes, exciting combat, expansive world, and multiplayer modes.


Negative Reviews from Critics and Players


However, not all critics and players have enjoyed Honor of Kings World as much as others. Some have criticized the game for its high hardware requirements, bugs and glitches, balance issues, pay-to-win elements, and lack of originality.


Average Ratings from Different Platforms and Sources


Honor of Kings World has received mixed ratings from different platforms and sources that reflect the opinions and experiences of critics and players. Here are some of the average ratings from different platforms and sources:

| Platform/Source | Rating |
| --- | --- |
| Google Play Store (China) | 4.5/5 stars (based on 1.2 million ratings) |
| App Store (China) | 4.7/5 stars (based on 300 thousand ratings) |
| APKPure | 4.2/5 stars (based on 10 thousand ratings) |
| Metacritic | 75/100 (based on 20 critic reviews) |
| User Score (Metacritic) | 6.8/10 (based on 100 user reviews) |
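
The platforms above use different scales (out of 5, 10, and 100), so the scores are easier to compare if you normalize them. The short snippet below rescales each figure from the table to 0–100 and takes a simple unweighted average; it is an illustrative calculation only, not an official aggregate score.

```python
# Rescale each rating from the table above to a 0-100 scale and average them.
# The unweighted mean is illustrative only, not an official aggregate score.
ratings = {
    "Google Play Store (China)": (4.5, 5),
    "App Store (China)": (4.7, 5),
    "APKPure": (4.2, 5),
    "Metacritic": (75, 100),
    "User Score (Metacritic)": (6.8, 10),
}

normalized = {name: score / scale * 100 for name, (score, scale) in ratings.items()}
average = sum(normalized.values()) / len(normalized)

for name, value in normalized.items():
    print(f"{name}: {value:.0f}/100")
print(f"Unweighted average: {average:.1f}/100")
```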

Conclusion


Honor of Kings World is a new open-world adventure game based on the popular mobile MOBA game Honor of Kings. It offers plenty of fun and challenge for fans of Honor of Kings and new players alike, with stunning graphics, an immersive soundtrack, diverse heroes, exciting combat, an expansive world, and multiplayer modes. You can also download and install the APK file on your Android device before the game is officially released in your region.


If you are looking for a new open-world action RPG game to play on your mobile device, you should give Honor of Kings World a try. You may find it to be an enjoyable and rewarding experience that will keep you hooked for hours.


Frequently Asked Questions

  1. What is Honor of Kings World?

    Honor of Kings World is a spin-off game of Honor of Kings, one of the most played and highest-grossing mobile MOBA games in the world. Honor of Kings World is an open-world action RPG game that features around 60 heroes from Honor of Kings, each with their own skills, skins, and stories.

  2. How can I download and install Honor of Kings World APK on my Android device?

    You can download and install Honor of Kings World APK on your Android device by following these steps:
    1) Check the compatibility of your device and the game requirements.
    2) Download the APK file from a trusted source like APKPure.
    3) Enable the installation of unknown sources on your device settings.
    4) Locate and install the APK file on your device.
    5) Launch the game and enjoy.

  3. What are the main features of Honor of Kings World?

    Honor of Kings World features stunning graphics and an immersive soundtrack, around 60 heroes with signature skills and stories, fierce and fast-paced combat with strategic gameplay, an expansive and dynamic open world with multiple regions and quests, and collaborative and competitive multiplayer modes with friends and other players.
    \ No newline at end of file diff --git a/spaces/A00001/bingothoo/src/components/external-link.tsx b/spaces/A00001/bingothoo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/AI4PD/hexviz/hexviz/attention.py b/spaces/AI4PD/hexviz/hexviz/attention.py deleted file mode 100644 index 92658321b62ad4bc367c2f1ae30a1d38ca4f3b15..0000000000000000000000000000000000000000 --- a/spaces/AI4PD/hexviz/hexviz/attention.py +++ /dev/null @@ -1,313 +0,0 @@ -import time -from io import StringIO -from urllib import request - -import requests -import streamlit as st -import torch -from Bio.PDB import PDBParser, Polypeptide, Structure -from Bio.PDB.Residue import Residue - -from hexviz.ec_number import ECNumber -from hexviz.models import ModelType, get_prot_bert, get_prot_t5, get_tape_bert, get_zymctrl - - -def get_structure(pdb_code: str) -> Structure: - """ - Get structure from PDB - """ - pdb_url = f"https://files.rcsb.org/download/{pdb_code}.pdb" - pdb_data = request.urlopen(pdb_url).read().decode("utf-8") - file = StringIO(pdb_data) - parser = PDBParser() - structure = parser.get_structure(pdb_code, file) - return structure - - -def get_pdb_file(pdb_code: str) -> Structure: - """ - Get structure from PDB - """ - pdb_url = f"https://files.rcsb.org/download/{pdb_code}.pdb" - pdb_data = request.urlopen(pdb_url).read().decode("utf-8") - file = StringIO(pdb_data) - return file - - -@st.cache -def get_pdb_from_seq(sequence: str) -> str | None: - """ - Get structure from sequence - """ - url = "https://api.esmatlas.com/foldSequence/v1/pdb/" - retries = 0 - pdb_str = None - while retries < 3 and pdb_str is None: - response = requests.post(url, data=sequence) - pdb_str = response.text - if pdb_str == "INTERNAL SERVER ERROR": - retries += 1 - time.sleep(0.1) - pdb_str = None - return pdb_str - - -def get_chains(structure: Structure) -> list[str]: - """ - Get list of chains in a structure - """ - chains = [] - for model in structure: - for chain in model.get_chains(): - chains.append(chain.id) - return chains - - -def res_to_1letter(residues: list[Residue]) -> str: - """ - Get single letter sequence from a list or Residues - - Residues not in the standard 20 amino acids are replaced with X - """ - res_names = [residue.get_resname() for residue in residues] - residues_single_letter = map(lambda x: Polypeptide.protein_letters_3to1.get(x, "X"), res_names) - - return "".join(list(residues_single_letter)) - - -def clean_and_validate_sequence(sequence: str) -> tuple[str, str | None]: - lines = sequence.split("\n") - cleaned_sequence = "".join(line.upper() for line in lines if not line.startswith(">")) - cleaned_sequence = cleaned_sequence.replace(" ", "") - valid_residues = set(Polypeptide.protein_letters_3to1.values()) - residues_in_sequence = set(cleaned_sequence) - - # Check if the sequence exceeds the max allowed length - max_sequence_length = 400 - if len(cleaned_sequence) > max_sequence_length: - error_message = ( - f"Sequence exceeds the max allowed length of {max_sequence_length} characters" - ) - return cleaned_sequence, error_message - - illegal_residues = residues_in_sequence - valid_residues - if illegal_residues: - illegal_residues_str = ", 
".join(illegal_residues) - error_message = f"Sequence contains illegal residues: {illegal_residues_str}" - return cleaned_sequence, error_message - else: - return cleaned_sequence, None - - -def remove_tokens(attentions, tokens, tokens_to_remove): - indices_to_remove = [i for i, token in enumerate(tokens) if token in tokens_to_remove] - - # Remove rows and columns corresponding to special tokens and periods - for idx in sorted(indices_to_remove, reverse=True): - attentions = torch.cat((attentions[:, :, :idx], attentions[:, :, idx + 1 :]), dim=2) - attentions = torch.cat((attentions[:, :, :, :idx], attentions[:, :, :, idx + 1 :]), dim=3) - - return attentions - - -@st.cache -def get_attention( - sequence: str, - model_type: ModelType = ModelType.TAPE_BERT, - remove_special_tokens: bool = True, - ec_number: str = None, -): - """ - Returns a tensor of shape [n_layers, n_heads, n_res, n_res] with attention weights - and the sequence of tokenes that the attention tensor corresponds to - """ - - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - if model_type == ModelType.TAPE_BERT: - tokenizer, model = get_tape_bert() - token_idxs = tokenizer.encode(sequence).tolist() - inputs = torch.tensor(token_idxs).unsqueeze(0) - - with torch.no_grad(): - attentions = model(inputs)[-1] - - tokenized_sequence = tokenizer.convert_ids_to_tokens(token_idxs) - - if remove_special_tokens: - # Remove attention from (first) and (last) token - attentions = [attention[:, :, 1:-1, 1:-1] for attention in attentions] - tokenized_sequence = tokenized_sequence[1:-1] - - attentions = torch.stack([attention.squeeze(0) for attention in attentions]) - - elif model_type == ModelType.ZymCTRL: - tokenizer, model = get_zymctrl() - - if ec_number: - sequence = f"{ec_number}{sequence}" - - inputs = tokenizer(sequence, return_tensors="pt").input_ids.to(device) - attention_mask = tokenizer(sequence, return_tensors="pt").attention_mask.to(device) - with torch.no_grad(): - outputs = model(inputs, attention_mask=attention_mask, output_attentions=True) - attentions = outputs.attentions - - tokenized_sequence = tokenizer.convert_ids_to_tokens(tokenizer.encode(sequence)) - - if ec_number and remove_special_tokens: - # Remove attention to special tokens and periods separating EC number components - tokens_to_remove = [".", "", "", "", ""] - attentions = [ - remove_tokens(attention, tokenized_sequence, tokens_to_remove) - for attention in attentions - ] - tokenized_sequence = [ - token for token in tokenized_sequence if token not in tokens_to_remove - ] - - # torch.Size([1, n_heads, n_res, n_res]) -> torch.Size([n_heads, n_res, n_res]) - attention_squeezed = [torch.squeeze(attention) for attention in attentions] - # ([n_heads, n_res, n_res]*n_layers) -> [n_layers, n_heads, n_res, n_res] - attention_stacked = torch.stack([attention for attention in attention_squeezed]) - attentions = attention_stacked - - elif model_type == ModelType.PROT_BERT: - tokenizer, model = get_prot_bert() - sequence_separated = " ".join(sequence) - token_idxs = tokenizer.encode(sequence_separated) - inputs = torch.tensor(token_idxs).unsqueeze(0).to(device) - with torch.no_grad(): - attentions = model(inputs, output_attentions=True)[-1] - - tokenized_sequence = tokenizer.convert_ids_to_tokens(token_idxs) - if remove_special_tokens: - # Remove attention from (first) and (last) token - attentions = [attention[:, :, 1:-1, 1:-1] for attention in attentions] - tokenized_sequence = tokenized_sequence[1:-1] - - attentions = 
torch.stack([attention.squeeze(0) for attention in attentions]) - - elif model_type == ModelType.PROT_T5: - tokenizer, model = get_prot_t5() - sequence_separated = " ".join(sequence) - token_idxs = tokenizer.encode(sequence_separated) - inputs = torch.tensor(token_idxs).unsqueeze(0).to(device) - with torch.no_grad(): - attentions = model(inputs, output_attentions=True)[-1] - - tokenized_sequence = tokenizer.convert_ids_to_tokens(token_idxs) - if remove_special_tokens: - # Remove attention to (last) token - attentions = [attention[:, :, :-1, :-1] for attention in attentions] - tokenized_sequence = tokenized_sequence[:-1] - attentions = torch.stack([attention.squeeze(0) for attention in attentions]) - - else: - raise ValueError(f"Model {model_type} not supported") - - # Transfer to CPU to avoid issues with streamlit caching - return attentions.cpu(), tokenized_sequence - - -def unidirectional_avg_filtered(attention, layer, head, threshold): - num_layers, num_heads, seq_len, _ = attention.shape - attention_head = attention[layer, head] - unidirectional_avg_for_head = [] - for i in range(seq_len): - for j in range(i, seq_len): - # Attention matrices for BERT models are asymetric. - # Bidirectional attention is represented by the average of the two values - sum = attention_head[i, j].item() + attention_head[j, i].item() - avg = sum / 2 - if avg >= threshold: - unidirectional_avg_for_head.append((avg, i, j)) - return unidirectional_avg_for_head - - -# Passing the pdb_str here is a workaround for streamlit caching -# where I need the input to be hashable and not changing -# The ideal would be to pass in the structure directly, not parsing -# Thist twice. If streamlit is upgaded to past 0.17 this can be -# fixed. -@st.cache(show_spinner=False) -def get_attention_pairs( - pdb_str: str, - layer: int, - head: int, - chain_ids: list[str] | None, - threshold: int = 0.2, - model_type: ModelType = ModelType.TAPE_BERT, - top_n: int = 2, - ec_numbers: list[list[ECNumber]] | None = None, -): - """ - Note: All residue indexes returned are 0 indexed - """ - structure = PDBParser().get_structure("pdb", StringIO(pdb_str)) - - if chain_ids: - chains = [ch for ch in structure.get_chains() if ch.id in chain_ids] - else: - chains = list(structure.get_chains()) - # Chains are treated at lists of residues to make indexing easier - # and to avoid troubles with residues in PDB files not having a consistent - # start index - chain_ids = [chain.id for chain in chains] - chains = [[res for res in chain.get_residues()] for chain in chains] - - attention_pairs = [] - top_residues = [] - - ec_tag_length = 4 - - def is_tag(x): - return x < ec_tag_length - - for i, chain in enumerate(chains): - ec_number = ec_numbers[i] if ec_numbers else None - ec_string = ".".join([ec.number for ec in ec_number]) if ec_number else "" - sequence = res_to_1letter(chain) - attention, _ = get_attention(sequence=sequence, model_type=model_type, ec_number=ec_string) - attention_unidirectional = unidirectional_avg_filtered(attention, layer, head, threshold) - - # Store sum of attention in to a resiue (from the unidirectional attention) - residue_attention = {} - for attn_value, res_1, res_2 in attention_unidirectional: - try: - if not ec_number: - coord_1 = chain[res_1]["CA"].coord.tolist() - coord_2 = chain[res_2]["CA"].coord.tolist() - else: - if is_tag(res_1): - coord_1 = ec_number[res_1].coordinate - else: - coord_1 = chain[res_1 - ec_tag_length]["CA"].coord.tolist() - if is_tag(res_2): - coord_2 = ec_number[res_2].coordinate - else: - coord_2 
= chain[res_2 - ec_tag_length]["CA"].coord.tolist() - - except KeyError: - continue - - attention_pairs.append((attn_value, coord_1, coord_2)) - if not ec_number: - residue_attention[res_1] = residue_attention.get(res_1, 0) + attn_value - residue_attention[res_2] = residue_attention.get(res_2, 0) + attn_value - else: - for res in [res_1, res_2]: - if not is_tag(res): - residue_attention[res - ec_tag_length] = ( - residue_attention.get(res - ec_tag_length, 0) + attn_value - ) - if not ec_number: - attention_into_res = attention[layer, head].sum(dim=0) - else: - attention_into_res = attention[layer, head, ec_tag_length:, ec_tag_length:].sum(dim=0) - top_n_values, top_n_indexes = torch.topk(attention_into_res, top_n) - - for res, attn_sum in zip(top_n_indexes, top_n_values): - fraction_of_total_attention = attn_sum.item() / len(sequence) - top_residues.append((fraction_of_total_attention, chain_ids[i], res.item())) - - return attention_pairs, top_residues diff --git a/spaces/AICODER009/food_detection/app.py b/spaces/AICODER009/food_detection/app.py deleted file mode 100644 index 790c523922aec5041e97ecd9de99f0961fa5f0c5..0000000000000000000000000000000000000000 --- a/spaces/AICODER009/food_detection/app.py +++ /dev/null @@ -1,77 +0,0 @@ -### 1. Imports and class names setup ### -import gradio as gr -import os -import torch - -from model import create_effnetb2_model -from timeit import default_timer as timer -from typing import Tuple, Dict - -# Setup class names -class_names = ["pizza", "steak", "sushi"] - -### 2. Model and transforms preparation ### - -# Create EffNetB2 model -effnetb2, effnetb2_transforms = create_effnetb2_model( - num_classes=3, # len(class_names) would also work -) - -# Load saved weights -effnetb2.load_state_dict( - torch.load( - f="09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth", - map_location=torch.device("cpu"), # load to CPU - ) -) - -### 3. Predict function ### - -# Create predict function -def predict(img) -> Tuple[Dict, float]: - """Transforms and performs a prediction on img and returns prediction and time taken. - """ - # Start the timer - start_time = timer() - - # Transform the target image and add a batch dimension - img = effnetb2_transforms(img).unsqueeze(0) - - # Put model into evaluation mode and turn on inference mode - effnetb2.eval() - with torch.inference_mode(): - # Pass the transformed image through the model and turn the prediction logits into prediction probabilities - pred_probs = torch.softmax(effnetb2(img), dim=1) - - # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter) - pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))} - - # Calculate the prediction time - pred_time = round(timer() - start_time, 5) - - # Return the prediction dictionary and prediction time - return pred_labels_and_probs, pred_time - -### 4. Gradio app ### - -# Create title, description and article strings -title = "FoodVision Mini 🍕🥩🍣" -description = "An EfficientNetB2 feature extractor computer vision model to classify images of food as pizza, steak or sushi." -article = "Created at [09. PyTorch Model Deployment](https://www.learnpytorch.io/09_pytorch_model_deployment/)." 
- -# Create examples list from "examples/" directory -example_list = [["examples/" + example] for example in os.listdir("examples")] - -# Create the Gradio demo -demo = gr.Interface(fn=predict, # mapping function from input to output - inputs=gr.Image(type="pil"), # what are the inputs? - outputs=[gr.Label(num_top_classes=3, label="Predictions"), # what are the outputs? - gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs - # Create examples list from "examples/" directory - examples=example_list, - title=title, - description=description, - article=article) - -# Launch the demo! -demo.launch() diff --git a/spaces/AIConsultant/MusicGen/audiocraft/utils/utils.py b/spaces/AIConsultant/MusicGen/audiocraft/utils/utils.py deleted file mode 100644 index 3135d70e949a058095ef84dd87b49384546c465c..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/utils/utils.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from concurrent.futures import ProcessPoolExecutor -from contextlib import contextmanager -from functools import wraps, lru_cache -import hashlib -import json -import logging -from pathlib import Path -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def model_hash(model: torch.nn.Module) -> str: - """Return a model hash. This should allow us to track regressions in model init - from the logs of past experiments. - """ - hasher = hashlib.sha1() - for p in model.parameters(): - hasher.update(p.data.cpu().numpy().tobytes()) - return hasher.hexdigest() - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. - batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. 
- """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. - """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). 
- """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." - final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. - """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. - - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). 
- """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens - - -# TODO: Move to flashy? -def copy_state(state: tp.Any, device: tp.Union[torch.device, str] = 'cpu', - dtype: tp.Optional[torch.dtype] = None) -> tp.Any: - if isinstance(state, torch.Tensor): - if dtype is None or not state.is_floating_point(): - dtype = state.dtype - return state.detach().to(device=device, dtype=dtype, copy=True) - elif isinstance(state, dict): - return {k: copy_state(v, device, dtype) for k, v in state.items()} - elif isinstance(state, list): - return [copy_state(v, device, dtype) for v in state] - - -# TODO: Move to flashy? -@contextmanager -def swap_state(model, state, **kwargs): - old_state = copy_state(model.state_dict()) - model.load_state_dict(state, **kwargs) - try: - yield - finally: - model.load_state_dict(old_state) - - -@lru_cache(None) -def warn_once(logger, msg): - """Warn about a given message only once.""" - logger.warning(msg) - - -def is_jsonable(x: tp.Any): - """Check if an object can be serialized into a json:""" - try: - json.dumps(x) - return True - except (TypeError, OverflowError): - return False - - -def load_clap_state_dict(clap_model, path: tp.Union[str, Path]): - """Wrapper around state dict loading of CLAP model - addressing compatibility issues between CLAP and AudioCraft - HuggingFace transformer version. - See: https://github.com/LAION-AI/CLAP/issues/118 - """ - from clap_module.factory import load_state_dict # type: ignore - pkg = load_state_dict(path) - pkg.pop('text_branch.embeddings.position_ids', None) - clap_model.model.load_state_dict(pkg) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/models/melgan.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/models/melgan.py deleted file mode 100644 index e021ae4817a8c1c97338e61b00b230c881836fd8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/models/melgan.py +++ /dev/null @@ -1,427 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""MelGAN Modules.""" - -import logging - -import numpy as np -import torch - -from modules.parallel_wavegan.layers import CausalConv1d -from modules.parallel_wavegan.layers import CausalConvTranspose1d -from modules.parallel_wavegan.layers import ResidualStack - - -class MelGANGenerator(torch.nn.Module): - """MelGAN generator module.""" - - def __init__(self, - in_channels=80, - out_channels=1, - kernel_size=7, - channels=512, - bias=True, - upsample_scales=[8, 8, 2, 2], - stack_kernel_size=3, - stacks=3, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_final_nonlinear_activation=True, - use_weight_norm=True, - use_causal_conv=False, - ): - """Initialize MelGANGenerator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_size (int): Kernel size of initial and final conv layer. - channels (int): Initial number of channels for conv layer. - bias (bool): Whether to add bias parameter in convolution layers. - upsample_scales (list): List of upsampling scales. - stack_kernel_size (int): Kernel size of dilated conv layers in residual stack. 
- stacks (int): Number of stacks in a single residual stack. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_final_nonlinear_activation (torch.nn.Module): Activation function for the final layer. - use_weight_norm (bool): Whether to use weight norm. - If set to true, it will be applied to all of the conv layers. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANGenerator, self).__init__() - - # check hyper parameters is valid - assert channels >= np.prod(upsample_scales) - assert channels % (2 ** len(upsample_scales)) == 0 - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - - # add initial layer - layers = [] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(in_channels, channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - - for i, upsample_scale in enumerate(upsample_scales): - # add upsampling layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - torch.nn.ConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - padding=upsample_scale // 2 + upsample_scale % 2, - output_padding=upsample_scale % 2, - bias=bias, - ) - ] - else: - layers += [ - CausalConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - bias=bias, - ) - ] - - # add residual stack - for j in range(stacks): - layers += [ - ResidualStack( - kernel_size=stack_kernel_size, - channels=channels // (2 ** (i + 1)), - dilation=stack_kernel_size ** j, - bias=bias, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - use_causal_conv=use_causal_conv, - ) - ] - - # add final layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - if use_final_nonlinear_activation: - layers += [torch.nn.Tanh()] - - # define the model as a single function - self.melgan = torch.nn.Sequential(*layers) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, c): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, 1, T ** prod(upsample_scales)). 
- - """ - return self.melgan(c) - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. - https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) - - -class MelGANDiscriminator(torch.nn.Module): - """MelGAN discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - ): - """Initilize MelGAN discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (list): List of two kernel sizes. The prod will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15, - the last two layers' kernel size will be 5 and 3, respectively. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. 
- - """ - super(MelGANDiscriminator, self).__init__() - self.layers = torch.nn.ModuleList() - - # check kernel size is valid - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - - # add first layer - self.layers += [ - torch.nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, np.prod(kernel_sizes), bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - - # add downsample layers - in_chs = channels - for downsample_scale in downsample_scales: - out_chs = min(in_chs * downsample_scale, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, - kernel_size=downsample_scale * 10 + 1, - stride=downsample_scale, - padding=downsample_scale * 5, - groups=in_chs // 4, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - in_chs = out_chs - - # add final layers - out_chs = min(in_chs * 2, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, kernel_sizes[0], - padding=(kernel_sizes[0] - 1) // 2, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - self.layers += [ - torch.nn.Conv1d( - out_chs, out_channels, kernel_sizes[1], - padding=(kernel_sizes[1] - 1) // 2, - bias=bias, - ), - ] - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of output tensors of each layer. - - """ - outs = [] - for f in self.layers: - x = f(x) - outs += [x] - - return outs - - -class MelGANMultiScaleDiscriminator(torch.nn.Module): - """MelGAN multi-scale discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - scales=3, - downsample_pooling="AvgPool1d", - # follow the official implementation setting - downsample_pooling_params={ - "kernel_size": 4, - "stride": 2, - "padding": 1, - "count_include_pad": False, - }, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_weight_norm=True, - ): - """Initilize MelGAN multi-scale discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_pooling (str): Pooling module name for downsampling of the inputs. - downsample_pooling_params (dict): Parameters for the above pooling module. - kernel_sizes (list): List of two kernel sizes. The sum will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_causal_conv (bool): Whether to use causal convolution. 
- - """ - super(MelGANMultiScaleDiscriminator, self).__init__() - self.discriminators = torch.nn.ModuleList() - - # add discriminators - for _ in range(scales): - self.discriminators += [ - MelGANDiscriminator( - in_channels=in_channels, - out_channels=out_channels, - kernel_sizes=kernel_sizes, - channels=channels, - max_downsample_channels=max_downsample_channels, - bias=bias, - downsample_scales=downsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - ) - ] - self.pooling = getattr(torch.nn, downsample_pooling)(**downsample_pooling_params) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of list of each discriminator outputs, which consists of each layer output tensors. - - """ - outs = [] - for f in self.discriminators: - outs += [f(x)] - x = self.pooling(x) - - return outs - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. 
- https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs.py deleted file mode 100644 index 295e35083df2a056ca71c171fef732958be12685..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/fs.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.distributions -import torch.nn.functional as F -import torch.optim -import torch.utils.data - -from text_to_speech.modules.tts.fs import FastSpeech -from tasks.tts.dataset_utils import FastSpeechWordDataset -from tasks.tts.speech_base import SpeechBaseTask -from text_to_speech.utils.audio.align import mel2token_to_dur -from text_to_speech.utils.audio.pitch.utils import denorm_f0 -from text_to_speech.utils.commons.hparams import hparams - - -class FastSpeechTask(SpeechBaseTask): - def __init__(self): - super().__init__() - self.dataset_cls = FastSpeechWordDataset - self.sil_ph = self.token_encoder.sil_phonemes() - - def build_tts_model(self): - dict_size = len(self.token_encoder) - self.model = FastSpeech(dict_size, hparams) - - def run_model(self, sample, infer=False, *args, **kwargs): - txt_tokens = sample['txt_tokens'] # [B, T_t] - spk_embed = sample.get('spk_embed') - spk_id = sample.get('spk_ids') - if not infer: - target = sample['mels'] # [B, T_s, 80] - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample.get('f0') - uv = sample.get('uv') - output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id, - f0=f0, uv=uv, infer=False, - ph2word=sample['ph2word'], - graph_lst=sample.get('graph_lst'), - etypes_lst=sample.get('etypes_lst'), - bert_feats=sample.get("bert_feats"), - cl_feats=sample.get("cl_feats") - ) - losses = {} - self.add_mel_loss(output['mel_out'], target, losses) - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - return losses, output - else: - use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur']) - use_gt_f0 = kwargs.get('infer_use_gt_f0', hparams['use_gt_f0']) - mel2ph, uv, f0 = None, None, None - if use_gt_dur: - mel2ph = sample['mel2ph'] - if use_gt_f0: - f0 = sample['f0'] - uv = sample['uv'] - output = self.model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id, - f0=f0, uv=uv, infer=True, - ph2word=sample['ph2word'], - graph_lst=sample.get('graph_lst'), - etypes_lst=sample.get('etypes_lst'), - bert_feats=sample.get("bert_feats"), - cl_feats=sample.get("cl_feats") - ) - return output - - def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, losses=None): - """ - - :param dur_pred: [B, T], float, log scale - :param mel2ph: [B, T] - :param txt_tokens: [B, T] - :param losses: - :return: - """ - B, T = txt_tokens.shape - nonpadding = (txt_tokens != 0).float() - dur_gt = mel2token_to_dur(mel2ph, T).float() * nonpadding - is_sil = torch.zeros_like(txt_tokens).bool() - for p in self.sil_ph: - is_sil = is_sil | (txt_tokens == self.token_encoder.encode(p)[0]) - is_sil = is_sil.float() # [B, T_txt] - losses['pdur'] = F.mse_loss((dur_pred + 1).log(), (dur_gt + 1).log(), reduction='none') - losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum() - 
losses['pdur'] = losses['pdur'] * hparams['lambda_ph_dur'] - # use linear scale for sentence and word duration - if hparams['lambda_word_dur'] > 0: - word_id = (is_sil.cumsum(-1) * (1 - is_sil)).long() - word_dur_p = dur_pred.new_zeros([B, word_id.max() + 1]).scatter_add(1, word_id, dur_pred)[:, 1:] - word_dur_g = dur_gt.new_zeros([B, word_id.max() + 1]).scatter_add(1, word_id, dur_gt)[:, 1:] - wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none') - word_nonpadding = (word_dur_g > 0).float() - wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum() - losses['wdur'] = wdur_loss * hparams['lambda_word_dur'] - if hparams['lambda_sent_dur'] > 0: - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean') - losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur'] - - def add_pitch_loss(self, output, sample, losses): - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample['f0'] - uv = sample['uv'] - nonpadding = (mel2ph != 0).float() if hparams['pitch_type'] == 'frame' \ - else (sample['txt_tokens'] != 0).float() - p_pred = output['pitch_pred'] - assert p_pred[..., 0].shape == f0.shape - if hparams['use_uv'] and hparams['pitch_type'] == 'frame': - assert p_pred[..., 1].shape == uv.shape, (p_pred.shape, uv.shape) - losses['uv'] = (F.binary_cross_entropy_with_logits( - p_pred[:, :, 1], uv, reduction='none') * nonpadding).sum() \ - / nonpadding.sum() * hparams['lambda_uv'] - nonpadding = nonpadding * (uv == 0).float() - f0_pred = p_pred[:, :, 0] - losses['f0'] = (F.l1_loss(f0_pred, f0, reduction='none') * nonpadding).sum() \ - / nonpadding.sum() * hparams['lambda_f0'] - - def save_valid_result(self, sample, batch_idx, model_out): - sr = hparams['audio_sample_rate'] - f0_gt = None - mel_out = model_out['mel_out'] - if sample.get('f0') is not None: - f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu()) - self.plot_mel(batch_idx, sample['mels'], mel_out, f0s=f0_gt) - if self.global_step > 0: - wav_pred = self.vocoder.spec2wav(mel_out[0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_val_{batch_idx}', wav_pred, self.global_step, sr) - # with gt duration - model_out = self.run_model(sample, infer=True, infer_use_gt_dur=True) - dur_info = self.get_plot_dur_info(sample, model_out) - del dur_info['dur_pred'] - wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_gdur_{batch_idx}', wav_pred, self.global_step, sr) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_gdur_{batch_idx}', - dur_info=dur_info, f0s=f0_gt) - - # with pred duration - if not hparams['use_gt_dur']: - model_out = self.run_model(sample, infer=True, infer_use_gt_dur=False) - dur_info = self.get_plot_dur_info(sample, model_out) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_pdur_{batch_idx}', - dur_info=dur_info, f0s=f0_gt) - wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_pdur_{batch_idx}', wav_pred, self.global_step, sr) - # gt wav - if self.global_step <= hparams['valid_infer_interval']: - mel_gt = sample['mels'][0].cpu() - wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt) - self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, sr) - - def get_plot_dur_info(self, sample, model_out): - T_txt = sample['txt_tokens'].shape[1] - dur_gt = mel2token_to_dur(sample['mel2ph'], T_txt)[0] - dur_pred = model_out['dur'] if 
'dur' in model_out else dur_gt - txt = self.token_encoder.decode(sample['txt_tokens'][0].cpu().numpy()) - txt = txt.split(" ") - return {'dur_gt': dur_gt, 'dur_pred': dur_pred, 'txt': txt} - - def test_step(self, sample, batch_idx): - """ - - :param sample: - :param batch_idx: - :return: - """ - assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference' - outputs = self.run_model(sample, infer=True) - text = sample['text'][0] - item_name = sample['item_name'][0] - tokens = sample['txt_tokens'][0].cpu().numpy() - mel_gt = sample['mels'][0].cpu().numpy() - mel_pred = outputs['mel_out'][0].cpu().numpy() - mel2ph = sample['mel2ph'][0].cpu().numpy() - mel2ph_pred = outputs['mel2ph'][0].cpu().numpy() - str_phs = self.token_encoder.decode(tokens, strip_padding=True) - base_fn = f'[{batch_idx:06d}][{item_name.replace("%", "_")}][%s]' - if text is not None: - base_fn += text.replace(":", "$3A")[:80] - base_fn = base_fn.replace(' ', '_') - gen_dir = self.gen_dir - wav_pred = self.vocoder.spec2wav(mel_pred) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs, mel2ph_pred]) - if hparams['save_gt']: - wav_gt = self.vocoder.spec2wav(mel_gt) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs, mel2ph]) - print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}") - return { - 'item_name': item_name, - 'text': text, - 'ph_tokens': self.token_encoder.decode(tokens.tolist()), - 'wav_fn_pred': base_fn % 'P', - 'wav_fn_gt': base_fn % 'G', - } diff --git a/spaces/ALSv/FSW/README.md b/spaces/ALSv/FSW/README.md deleted file mode 100644 index 42b0e9076cccb451550d991e0e1b486f2f7e50d8..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Apex_Face -emoji: 🖤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: true -license: bigcode-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192.py deleted file mode 100644 index ff2c2e097cc87ac4ddd852b1ef8c2a91ed051fd6..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192.py +++ /dev/null @@ -1,2861 +0,0 @@ -default_scope = 'mmpose' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=50), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict( - type='CheckpointHook', interval=10, save_best='PCK', rule='greater'), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='PoseVisualizationHook', enable=False)) -custom_hooks = [dict(type='SyncBuffersHook')] -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl')) -vis_backends = [dict(type='LocalVisBackend')] -visualizer = dict( - type='PoseLocalVisualizer', - 
vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')], - name='visualizer') -log_processor = dict( - type='LogProcessor', window_size=50, by_epoch=True, num_digits=6) -log_level = 'INFO' -load_from = None -resume = False -backend_args = dict(backend='local') -train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) -val_cfg = dict() -test_cfg = dict() -colors = dict( - sss=[255, 128, 0], - lss=[255, 0, 128], - sso=[128, 0, 255], - lso=[0, 128, 255], - vest=[0, 128, 128], - sling=[0, 0, 128], - shorts=[128, 128, 128], - trousers=[128, 0, 128], - skirt=[64, 128, 128], - ssd=[64, 64, 128], - lsd=[128, 64, 0], - vd=[128, 64, 255], - sd=[128, 64, 0]) -dataset_info = dict( - dataset_name='deepfashion2', - paper_info=dict( - author= - 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo', - title= - 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images', - container= - 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)', - year='2019', - homepage='https://github.com/switchablenorms/DeepFashion2'), - keypoint_info=dict({ - 0: - dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''), - 1: - dict( - name='sss_kpt2', - id=1, - color=[255, 128, 0], - type='', - swap='sss_kpt6'), - 2: - dict( - name='sss_kpt3', - id=2, - color=[255, 128, 0], - type='', - swap='sss_kpt5'), - 3: - dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''), - 4: - dict( - name='sss_kpt5', - id=4, - color=[255, 128, 0], - type='', - swap='sss_kpt3'), - 5: - dict( - name='sss_kpt6', - id=5, - color=[255, 128, 0], - type='', - swap='sss_kpt2'), - 6: - dict( - name='sss_kpt7', - id=6, - color=[255, 128, 0], - type='', - swap='sss_kpt25'), - 7: - dict( - name='sss_kpt8', - id=7, - color=[255, 128, 0], - type='', - swap='sss_kpt24'), - 8: - dict( - name='sss_kpt9', - id=8, - color=[255, 128, 0], - type='', - swap='sss_kpt23'), - 9: - dict( - name='sss_kpt10', - id=9, - color=[255, 128, 0], - type='', - swap='sss_kpt22'), - 10: - dict( - name='sss_kpt11', - id=10, - color=[255, 128, 0], - type='', - swap='sss_kpt21'), - 11: - dict( - name='sss_kpt12', - id=11, - color=[255, 128, 0], - type='', - swap='sss_kpt20'), - 12: - dict( - name='sss_kpt13', - id=12, - color=[255, 128, 0], - type='', - swap='sss_kpt19'), - 13: - dict( - name='sss_kpt14', - id=13, - color=[255, 128, 0], - type='', - swap='sss_kpt18'), - 14: - dict( - name='sss_kpt15', - id=14, - color=[255, 128, 0], - type='', - swap='sss_kpt17'), - 15: - dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''), - 16: - dict( - name='sss_kpt17', - id=16, - color=[255, 128, 0], - type='', - swap='sss_kpt15'), - 17: - dict( - name='sss_kpt18', - id=17, - color=[255, 128, 0], - type='', - swap='sss_kpt14'), - 18: - dict( - name='sss_kpt19', - id=18, - color=[255, 128, 0], - type='', - swap='sss_kpt13'), - 19: - dict( - name='sss_kpt20', - id=19, - color=[255, 128, 0], - type='', - swap='sss_kpt12'), - 20: - dict( - name='sss_kpt21', - id=20, - color=[255, 128, 0], - type='', - swap='sss_kpt11'), - 21: - dict( - name='sss_kpt22', - id=21, - color=[255, 128, 0], - type='', - swap='sss_kpt10'), - 22: - dict( - name='sss_kpt23', - id=22, - color=[255, 128, 0], - type='', - swap='sss_kpt9'), - 23: - dict( - name='sss_kpt24', - id=23, - color=[255, 128, 0], - type='', - swap='sss_kpt8'), - 24: - dict( - name='sss_kpt25', - id=24, - color=[255, 128, 0], - type='', - swap='sss_kpt7'), - 
25: - dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''), - 26: - dict( - name='lss_kpt2', - id=26, - color=[255, 0, 128], - type='', - swap='lss_kpt6'), - 27: - dict( - name='lss_kpt3', - id=27, - color=[255, 0, 128], - type='', - swap='lss_kpt5'), - 28: - dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''), - 29: - dict( - name='lss_kpt5', - id=29, - color=[255, 0, 128], - type='', - swap='lss_kpt3'), - 30: - dict( - name='lss_kpt6', - id=30, - color=[255, 0, 128], - type='', - swap='lss_kpt2'), - 31: - dict( - name='lss_kpt7', - id=31, - color=[255, 0, 128], - type='', - swap='lss_kpt33'), - 32: - dict( - name='lss_kpt8', - id=32, - color=[255, 0, 128], - type='', - swap='lss_kpt32'), - 33: - dict( - name='lss_kpt9', - id=33, - color=[255, 0, 128], - type='', - swap='lss_kpt31'), - 34: - dict( - name='lss_kpt10', - id=34, - color=[255, 0, 128], - type='', - swap='lss_kpt30'), - 35: - dict( - name='lss_kpt11', - id=35, - color=[255, 0, 128], - type='', - swap='lss_kpt29'), - 36: - dict( - name='lss_kpt12', - id=36, - color=[255, 0, 128], - type='', - swap='lss_kpt28'), - 37: - dict( - name='lss_kpt13', - id=37, - color=[255, 0, 128], - type='', - swap='lss_kpt27'), - 38: - dict( - name='lss_kpt14', - id=38, - color=[255, 0, 128], - type='', - swap='lss_kpt26'), - 39: - dict( - name='lss_kpt15', - id=39, - color=[255, 0, 128], - type='', - swap='lss_kpt25'), - 40: - dict( - name='lss_kpt16', - id=40, - color=[255, 0, 128], - type='', - swap='lss_kpt24'), - 41: - dict( - name='lss_kpt17', - id=41, - color=[255, 0, 128], - type='', - swap='lss_kpt23'), - 42: - dict( - name='lss_kpt18', - id=42, - color=[255, 0, 128], - type='', - swap='lss_kpt22'), - 43: - dict( - name='lss_kpt19', - id=43, - color=[255, 0, 128], - type='', - swap='lss_kpt21'), - 44: - dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''), - 45: - dict( - name='lss_kpt21', - id=45, - color=[255, 0, 128], - type='', - swap='lss_kpt19'), - 46: - dict( - name='lss_kpt22', - id=46, - color=[255, 0, 128], - type='', - swap='lss_kpt18'), - 47: - dict( - name='lss_kpt23', - id=47, - color=[255, 0, 128], - type='', - swap='lss_kpt17'), - 48: - dict( - name='lss_kpt24', - id=48, - color=[255, 0, 128], - type='', - swap='lss_kpt16'), - 49: - dict( - name='lss_kpt25', - id=49, - color=[255, 0, 128], - type='', - swap='lss_kpt15'), - 50: - dict( - name='lss_kpt26', - id=50, - color=[255, 0, 128], - type='', - swap='lss_kpt14'), - 51: - dict( - name='lss_kpt27', - id=51, - color=[255, 0, 128], - type='', - swap='lss_kpt13'), - 52: - dict( - name='lss_kpt28', - id=52, - color=[255, 0, 128], - type='', - swap='lss_kpt12'), - 53: - dict( - name='lss_kpt29', - id=53, - color=[255, 0, 128], - type='', - swap='lss_kpt11'), - 54: - dict( - name='lss_kpt30', - id=54, - color=[255, 0, 128], - type='', - swap='lss_kpt10'), - 55: - dict( - name='lss_kpt31', - id=55, - color=[255, 0, 128], - type='', - swap='lss_kpt9'), - 56: - dict( - name='lss_kpt32', - id=56, - color=[255, 0, 128], - type='', - swap='lss_kpt8'), - 57: - dict( - name='lss_kpt33', - id=57, - color=[255, 0, 128], - type='', - swap='lss_kpt7'), - 58: - dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''), - 59: - dict( - name='sso_kpt2', - id=59, - color=[128, 0, 255], - type='', - swap='sso_kpt26'), - 60: - dict( - name='sso_kpt3', - id=60, - color=[128, 0, 255], - type='', - swap='sso_kpt5'), - 61: - dict( - name='sso_kpt4', - id=61, - color=[128, 0, 255], - type='', - swap='sso_kpt6'), - 62: - dict( - name='sso_kpt5', 
- id=62, - color=[128, 0, 255], - type='', - swap='sso_kpt3'), - 63: - dict( - name='sso_kpt6', - id=63, - color=[128, 0, 255], - type='', - swap='sso_kpt4'), - 64: - dict( - name='sso_kpt7', - id=64, - color=[128, 0, 255], - type='', - swap='sso_kpt25'), - 65: - dict( - name='sso_kpt8', - id=65, - color=[128, 0, 255], - type='', - swap='sso_kpt24'), - 66: - dict( - name='sso_kpt9', - id=66, - color=[128, 0, 255], - type='', - swap='sso_kpt23'), - 67: - dict( - name='sso_kpt10', - id=67, - color=[128, 0, 255], - type='', - swap='sso_kpt22'), - 68: - dict( - name='sso_kpt11', - id=68, - color=[128, 0, 255], - type='', - swap='sso_kpt21'), - 69: - dict( - name='sso_kpt12', - id=69, - color=[128, 0, 255], - type='', - swap='sso_kpt20'), - 70: - dict( - name='sso_kpt13', - id=70, - color=[128, 0, 255], - type='', - swap='sso_kpt19'), - 71: - dict( - name='sso_kpt14', - id=71, - color=[128, 0, 255], - type='', - swap='sso_kpt18'), - 72: - dict( - name='sso_kpt15', - id=72, - color=[128, 0, 255], - type='', - swap='sso_kpt17'), - 73: - dict( - name='sso_kpt16', - id=73, - color=[128, 0, 255], - type='', - swap='sso_kpt29'), - 74: - dict( - name='sso_kpt17', - id=74, - color=[128, 0, 255], - type='', - swap='sso_kpt15'), - 75: - dict( - name='sso_kpt18', - id=75, - color=[128, 0, 255], - type='', - swap='sso_kpt14'), - 76: - dict( - name='sso_kpt19', - id=76, - color=[128, 0, 255], - type='', - swap='sso_kpt13'), - 77: - dict( - name='sso_kpt20', - id=77, - color=[128, 0, 255], - type='', - swap='sso_kpt12'), - 78: - dict( - name='sso_kpt21', - id=78, - color=[128, 0, 255], - type='', - swap='sso_kpt11'), - 79: - dict( - name='sso_kpt22', - id=79, - color=[128, 0, 255], - type='', - swap='sso_kpt10'), - 80: - dict( - name='sso_kpt23', - id=80, - color=[128, 0, 255], - type='', - swap='sso_kpt9'), - 81: - dict( - name='sso_kpt24', - id=81, - color=[128, 0, 255], - type='', - swap='sso_kpt8'), - 82: - dict( - name='sso_kpt25', - id=82, - color=[128, 0, 255], - type='', - swap='sso_kpt7'), - 83: - dict( - name='sso_kpt26', - id=83, - color=[128, 0, 255], - type='', - swap='sso_kpt2'), - 84: - dict( - name='sso_kpt27', - id=84, - color=[128, 0, 255], - type='', - swap='sso_kpt30'), - 85: - dict( - name='sso_kpt28', - id=85, - color=[128, 0, 255], - type='', - swap='sso_kpt31'), - 86: - dict( - name='sso_kpt29', - id=86, - color=[128, 0, 255], - type='', - swap='sso_kpt16'), - 87: - dict( - name='sso_kpt30', - id=87, - color=[128, 0, 255], - type='', - swap='sso_kpt27'), - 88: - dict( - name='sso_kpt31', - id=88, - color=[128, 0, 255], - type='', - swap='sso_kpt28'), - 89: - dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''), - 90: - dict( - name='lso_kpt2', - id=90, - color=[0, 128, 255], - type='', - swap='lso_kpt6'), - 91: - dict( - name='lso_kpt3', - id=91, - color=[0, 128, 255], - type='', - swap='lso_kpt5'), - 92: - dict( - name='lso_kpt4', - id=92, - color=[0, 128, 255], - type='', - swap='lso_kpt34'), - 93: - dict( - name='lso_kpt5', - id=93, - color=[0, 128, 255], - type='', - swap='lso_kpt3'), - 94: - dict( - name='lso_kpt6', - id=94, - color=[0, 128, 255], - type='', - swap='lso_kpt2'), - 95: - dict( - name='lso_kpt7', - id=95, - color=[0, 128, 255], - type='', - swap='lso_kpt33'), - 96: - dict( - name='lso_kpt8', - id=96, - color=[0, 128, 255], - type='', - swap='lso_kpt32'), - 97: - dict( - name='lso_kpt9', - id=97, - color=[0, 128, 255], - type='', - swap='lso_kpt31'), - 98: - dict( - name='lso_kpt10', - id=98, - color=[0, 128, 255], - type='', - swap='lso_kpt30'), - 99: 
- dict( - name='lso_kpt11', - id=99, - color=[0, 128, 255], - type='', - swap='lso_kpt29'), - 100: - dict( - name='lso_kpt12', - id=100, - color=[0, 128, 255], - type='', - swap='lso_kpt28'), - 101: - dict( - name='lso_kpt13', - id=101, - color=[0, 128, 255], - type='', - swap='lso_kpt27'), - 102: - dict( - name='lso_kpt14', - id=102, - color=[0, 128, 255], - type='', - swap='lso_kpt26'), - 103: - dict( - name='lso_kpt15', - id=103, - color=[0, 128, 255], - type='', - swap='lso_kpt25'), - 104: - dict( - name='lso_kpt16', - id=104, - color=[0, 128, 255], - type='', - swap='lso_kpt24'), - 105: - dict( - name='lso_kpt17', - id=105, - color=[0, 128, 255], - type='', - swap='lso_kpt23'), - 106: - dict( - name='lso_kpt18', - id=106, - color=[0, 128, 255], - type='', - swap='lso_kpt22'), - 107: - dict( - name='lso_kpt19', - id=107, - color=[0, 128, 255], - type='', - swap='lso_kpt21'), - 108: - dict( - name='lso_kpt20', - id=108, - color=[0, 128, 255], - type='', - swap='lso_kpt37'), - 109: - dict( - name='lso_kpt21', - id=109, - color=[0, 128, 255], - type='', - swap='lso_kpt19'), - 110: - dict( - name='lso_kpt22', - id=110, - color=[0, 128, 255], - type='', - swap='lso_kpt18'), - 111: - dict( - name='lso_kpt23', - id=111, - color=[0, 128, 255], - type='', - swap='lso_kpt17'), - 112: - dict( - name='lso_kpt24', - id=112, - color=[0, 128, 255], - type='', - swap='lso_kpt16'), - 113: - dict( - name='lso_kpt25', - id=113, - color=[0, 128, 255], - type='', - swap='lso_kpt15'), - 114: - dict( - name='lso_kpt26', - id=114, - color=[0, 128, 255], - type='', - swap='lso_kpt14'), - 115: - dict( - name='lso_kpt27', - id=115, - color=[0, 128, 255], - type='', - swap='lso_kpt13'), - 116: - dict( - name='lso_kpt28', - id=116, - color=[0, 128, 255], - type='', - swap='lso_kpt12'), - 117: - dict( - name='lso_kpt29', - id=117, - color=[0, 128, 255], - type='', - swap='lso_kpt11'), - 118: - dict( - name='lso_kpt30', - id=118, - color=[0, 128, 255], - type='', - swap='lso_kpt10'), - 119: - dict( - name='lso_kpt31', - id=119, - color=[0, 128, 255], - type='', - swap='lso_kpt9'), - 120: - dict( - name='lso_kpt32', - id=120, - color=[0, 128, 255], - type='', - swap='lso_kpt8'), - 121: - dict( - name='lso_kpt33', - id=121, - color=[0, 128, 255], - type='', - swap='lso_kpt7'), - 122: - dict( - name='lso_kpt34', - id=122, - color=[0, 128, 255], - type='', - swap='lso_kpt4'), - 123: - dict( - name='lso_kpt35', - id=123, - color=[0, 128, 255], - type='', - swap='lso_kpt38'), - 124: - dict( - name='lso_kpt36', - id=124, - color=[0, 128, 255], - type='', - swap='lso_kpt39'), - 125: - dict( - name='lso_kpt37', - id=125, - color=[0, 128, 255], - type='', - swap='lso_kpt20'), - 126: - dict( - name='lso_kpt38', - id=126, - color=[0, 128, 255], - type='', - swap='lso_kpt35'), - 127: - dict( - name='lso_kpt39', - id=127, - color=[0, 128, 255], - type='', - swap='lso_kpt36'), - 128: - dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''), - 129: - dict( - name='vest_kpt2', - id=129, - color=[0, 128, 128], - type='', - swap='vest_kpt6'), - 130: - dict( - name='vest_kpt3', - id=130, - color=[0, 128, 128], - type='', - swap='vest_kpt5'), - 131: - dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''), - 132: - dict( - name='vest_kpt5', - id=132, - color=[0, 128, 128], - type='', - swap='vest_kpt3'), - 133: - dict( - name='vest_kpt6', - id=133, - color=[0, 128, 128], - type='', - swap='vest_kpt2'), - 134: - dict( - name='vest_kpt7', - id=134, - color=[0, 128, 128], - type='', - swap='vest_kpt15'), - 
135: - dict( - name='vest_kpt8', - id=135, - color=[0, 128, 128], - type='', - swap='vest_kpt14'), - 136: - dict( - name='vest_kpt9', - id=136, - color=[0, 128, 128], - type='', - swap='vest_kpt13'), - 137: - dict( - name='vest_kpt10', - id=137, - color=[0, 128, 128], - type='', - swap='vest_kpt12'), - 138: - dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''), - 139: - dict( - name='vest_kpt12', - id=139, - color=[0, 128, 128], - type='', - swap='vest_kpt10'), - 140: - dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''), - 141: - dict( - name='vest_kpt14', - id=141, - color=[0, 128, 128], - type='', - swap='vest_kpt8'), - 142: - dict( - name='vest_kpt15', - id=142, - color=[0, 128, 128], - type='', - swap='vest_kpt7'), - 143: - dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''), - 144: - dict( - name='sling_kpt2', - id=144, - color=[0, 0, 128], - type='', - swap='sling_kpt6'), - 145: - dict( - name='sling_kpt3', - id=145, - color=[0, 0, 128], - type='', - swap='sling_kpt5'), - 146: - dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''), - 147: - dict( - name='sling_kpt5', - id=147, - color=[0, 0, 128], - type='', - swap='sling_kpt3'), - 148: - dict( - name='sling_kpt6', - id=148, - color=[0, 0, 128], - type='', - swap='sling_kpt2'), - 149: - dict( - name='sling_kpt7', - id=149, - color=[0, 0, 128], - type='', - swap='sling_kpt15'), - 150: - dict( - name='sling_kpt8', - id=150, - color=[0, 0, 128], - type='', - swap='sling_kpt14'), - 151: - dict( - name='sling_kpt9', - id=151, - color=[0, 0, 128], - type='', - swap='sling_kpt13'), - 152: - dict( - name='sling_kpt10', - id=152, - color=[0, 0, 128], - type='', - swap='sling_kpt12'), - 153: - dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''), - 154: - dict( - name='sling_kpt12', - id=154, - color=[0, 0, 128], - type='', - swap='sling_kpt10'), - 155: - dict( - name='sling_kpt13', - id=155, - color=[0, 0, 128], - type='', - swap='sling_kpt9'), - 156: - dict( - name='sling_kpt14', - id=156, - color=[0, 0, 128], - type='', - swap='sling_kpt8'), - 157: - dict( - name='sling_kpt15', - id=157, - color=[0, 0, 128], - type='', - swap='sling_kpt7'), - 158: - dict( - name='shorts_kpt1', - id=158, - color=[128, 128, 128], - type='', - swap='shorts_kpt3'), - 159: - dict( - name='shorts_kpt2', - id=159, - color=[128, 128, 128], - type='', - swap=''), - 160: - dict( - name='shorts_kpt3', - id=160, - color=[128, 128, 128], - type='', - swap='shorts_kpt1'), - 161: - dict( - name='shorts_kpt4', - id=161, - color=[128, 128, 128], - type='', - swap='shorts_kpt10'), - 162: - dict( - name='shorts_kpt5', - id=162, - color=[128, 128, 128], - type='', - swap='shorts_kpt9'), - 163: - dict( - name='shorts_kpt6', - id=163, - color=[128, 128, 128], - type='', - swap='shorts_kpt8'), - 164: - dict( - name='shorts_kpt7', - id=164, - color=[128, 128, 128], - type='', - swap=''), - 165: - dict( - name='shorts_kpt8', - id=165, - color=[128, 128, 128], - type='', - swap='shorts_kpt6'), - 166: - dict( - name='shorts_kpt9', - id=166, - color=[128, 128, 128], - type='', - swap='shorts_kpt5'), - 167: - dict( - name='shorts_kpt10', - id=167, - color=[128, 128, 128], - type='', - swap='shorts_kpt4'), - 168: - dict( - name='trousers_kpt1', - id=168, - color=[128, 0, 128], - type='', - swap='trousers_kpt3'), - 169: - dict( - name='trousers_kpt2', - id=169, - color=[128, 0, 128], - type='', - swap=''), - 170: - dict( - name='trousers_kpt3', - id=170, - color=[128, 0, 128], - type='', - 
swap='trousers_kpt1'), - 171: - dict( - name='trousers_kpt4', - id=171, - color=[128, 0, 128], - type='', - swap='trousers_kpt14'), - 172: - dict( - name='trousers_kpt5', - id=172, - color=[128, 0, 128], - type='', - swap='trousers_kpt13'), - 173: - dict( - name='trousers_kpt6', - id=173, - color=[128, 0, 128], - type='', - swap='trousers_kpt12'), - 174: - dict( - name='trousers_kpt7', - id=174, - color=[128, 0, 128], - type='', - swap='trousers_kpt11'), - 175: - dict( - name='trousers_kpt8', - id=175, - color=[128, 0, 128], - type='', - swap='trousers_kpt10'), - 176: - dict( - name='trousers_kpt9', - id=176, - color=[128, 0, 128], - type='', - swap=''), - 177: - dict( - name='trousers_kpt10', - id=177, - color=[128, 0, 128], - type='', - swap='trousers_kpt8'), - 178: - dict( - name='trousers_kpt11', - id=178, - color=[128, 0, 128], - type='', - swap='trousers_kpt7'), - 179: - dict( - name='trousers_kpt12', - id=179, - color=[128, 0, 128], - type='', - swap='trousers_kpt6'), - 180: - dict( - name='trousers_kpt13', - id=180, - color=[128, 0, 128], - type='', - swap='trousers_kpt5'), - 181: - dict( - name='trousers_kpt14', - id=181, - color=[128, 0, 128], - type='', - swap='trousers_kpt4'), - 182: - dict( - name='skirt_kpt1', - id=182, - color=[64, 128, 128], - type='', - swap='skirt_kpt3'), - 183: - dict( - name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''), - 184: - dict( - name='skirt_kpt3', - id=184, - color=[64, 128, 128], - type='', - swap='skirt_kpt1'), - 185: - dict( - name='skirt_kpt4', - id=185, - color=[64, 128, 128], - type='', - swap='skirt_kpt8'), - 186: - dict( - name='skirt_kpt5', - id=186, - color=[64, 128, 128], - type='', - swap='skirt_kpt7'), - 187: - dict( - name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''), - 188: - dict( - name='skirt_kpt7', - id=188, - color=[64, 128, 128], - type='', - swap='skirt_kpt5'), - 189: - dict( - name='skirt_kpt8', - id=189, - color=[64, 128, 128], - type='', - swap='skirt_kpt4'), - 190: - dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''), - 191: - dict( - name='ssd_kpt2', - id=191, - color=[64, 64, 128], - type='', - swap='ssd_kpt6'), - 192: - dict( - name='ssd_kpt3', - id=192, - color=[64, 64, 128], - type='', - swap='ssd_kpt5'), - 193: - dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''), - 194: - dict( - name='ssd_kpt5', - id=194, - color=[64, 64, 128], - type='', - swap='ssd_kpt3'), - 195: - dict( - name='ssd_kpt6', - id=195, - color=[64, 64, 128], - type='', - swap='ssd_kpt2'), - 196: - dict( - name='ssd_kpt7', - id=196, - color=[64, 64, 128], - type='', - swap='ssd_kpt29'), - 197: - dict( - name='ssd_kpt8', - id=197, - color=[64, 64, 128], - type='', - swap='ssd_kpt28'), - 198: - dict( - name='ssd_kpt9', - id=198, - color=[64, 64, 128], - type='', - swap='ssd_kpt27'), - 199: - dict( - name='ssd_kpt10', - id=199, - color=[64, 64, 128], - type='', - swap='ssd_kpt26'), - 200: - dict( - name='ssd_kpt11', - id=200, - color=[64, 64, 128], - type='', - swap='ssd_kpt25'), - 201: - dict( - name='ssd_kpt12', - id=201, - color=[64, 64, 128], - type='', - swap='ssd_kpt24'), - 202: - dict( - name='ssd_kpt13', - id=202, - color=[64, 64, 128], - type='', - swap='ssd_kpt23'), - 203: - dict( - name='ssd_kpt14', - id=203, - color=[64, 64, 128], - type='', - swap='ssd_kpt22'), - 204: - dict( - name='ssd_kpt15', - id=204, - color=[64, 64, 128], - type='', - swap='ssd_kpt21'), - 205: - dict( - name='ssd_kpt16', - id=205, - color=[64, 64, 128], - type='', - swap='ssd_kpt20'), - 206: - 
dict( - name='ssd_kpt17', - id=206, - color=[64, 64, 128], - type='', - swap='ssd_kpt19'), - 207: - dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''), - 208: - dict( - name='ssd_kpt19', - id=208, - color=[64, 64, 128], - type='', - swap='ssd_kpt17'), - 209: - dict( - name='ssd_kpt20', - id=209, - color=[64, 64, 128], - type='', - swap='ssd_kpt16'), - 210: - dict( - name='ssd_kpt21', - id=210, - color=[64, 64, 128], - type='', - swap='ssd_kpt15'), - 211: - dict( - name='ssd_kpt22', - id=211, - color=[64, 64, 128], - type='', - swap='ssd_kpt14'), - 212: - dict( - name='ssd_kpt23', - id=212, - color=[64, 64, 128], - type='', - swap='ssd_kpt13'), - 213: - dict( - name='ssd_kpt24', - id=213, - color=[64, 64, 128], - type='', - swap='ssd_kpt12'), - 214: - dict( - name='ssd_kpt25', - id=214, - color=[64, 64, 128], - type='', - swap='ssd_kpt11'), - 215: - dict( - name='ssd_kpt26', - id=215, - color=[64, 64, 128], - type='', - swap='ssd_kpt10'), - 216: - dict( - name='ssd_kpt27', - id=216, - color=[64, 64, 128], - type='', - swap='ssd_kpt9'), - 217: - dict( - name='ssd_kpt28', - id=217, - color=[64, 64, 128], - type='', - swap='ssd_kpt8'), - 218: - dict( - name='ssd_kpt29', - id=218, - color=[64, 64, 128], - type='', - swap='ssd_kpt7'), - 219: - dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''), - 220: - dict( - name='lsd_kpt2', - id=220, - color=[128, 64, 0], - type='', - swap='lsd_kpt6'), - 221: - dict( - name='lsd_kpt3', - id=221, - color=[128, 64, 0], - type='', - swap='lsd_kpt5'), - 222: - dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''), - 223: - dict( - name='lsd_kpt5', - id=223, - color=[128, 64, 0], - type='', - swap='lsd_kpt3'), - 224: - dict( - name='lsd_kpt6', - id=224, - color=[128, 64, 0], - type='', - swap='lsd_kpt2'), - 225: - dict( - name='lsd_kpt7', - id=225, - color=[128, 64, 0], - type='', - swap='lsd_kpt37'), - 226: - dict( - name='lsd_kpt8', - id=226, - color=[128, 64, 0], - type='', - swap='lsd_kpt36'), - 227: - dict( - name='lsd_kpt9', - id=227, - color=[128, 64, 0], - type='', - swap='lsd_kpt35'), - 228: - dict( - name='lsd_kpt10', - id=228, - color=[128, 64, 0], - type='', - swap='lsd_kpt34'), - 229: - dict( - name='lsd_kpt11', - id=229, - color=[128, 64, 0], - type='', - swap='lsd_kpt33'), - 230: - dict( - name='lsd_kpt12', - id=230, - color=[128, 64, 0], - type='', - swap='lsd_kpt32'), - 231: - dict( - name='lsd_kpt13', - id=231, - color=[128, 64, 0], - type='', - swap='lsd_kpt31'), - 232: - dict( - name='lsd_kpt14', - id=232, - color=[128, 64, 0], - type='', - swap='lsd_kpt30'), - 233: - dict( - name='lsd_kpt15', - id=233, - color=[128, 64, 0], - type='', - swap='lsd_kpt29'), - 234: - dict( - name='lsd_kpt16', - id=234, - color=[128, 64, 0], - type='', - swap='lsd_kpt28'), - 235: - dict( - name='lsd_kpt17', - id=235, - color=[128, 64, 0], - type='', - swap='lsd_kpt27'), - 236: - dict( - name='lsd_kpt18', - id=236, - color=[128, 64, 0], - type='', - swap='lsd_kpt26'), - 237: - dict( - name='lsd_kpt19', - id=237, - color=[128, 64, 0], - type='', - swap='lsd_kpt25'), - 238: - dict( - name='lsd_kpt20', - id=238, - color=[128, 64, 0], - type='', - swap='lsd_kpt24'), - 239: - dict( - name='lsd_kpt21', - id=239, - color=[128, 64, 0], - type='', - swap='lsd_kpt23'), - 240: - dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''), - 241: - dict( - name='lsd_kpt23', - id=241, - color=[128, 64, 0], - type='', - swap='lsd_kpt21'), - 242: - dict( - name='lsd_kpt24', - id=242, - color=[128, 64, 0], - type='', - 
swap='lsd_kpt20'), - 243: - dict( - name='lsd_kpt25', - id=243, - color=[128, 64, 0], - type='', - swap='lsd_kpt19'), - 244: - dict( - name='lsd_kpt26', - id=244, - color=[128, 64, 0], - type='', - swap='lsd_kpt18'), - 245: - dict( - name='lsd_kpt27', - id=245, - color=[128, 64, 0], - type='', - swap='lsd_kpt17'), - 246: - dict( - name='lsd_kpt28', - id=246, - color=[128, 64, 0], - type='', - swap='lsd_kpt16'), - 247: - dict( - name='lsd_kpt29', - id=247, - color=[128, 64, 0], - type='', - swap='lsd_kpt15'), - 248: - dict( - name='lsd_kpt30', - id=248, - color=[128, 64, 0], - type='', - swap='lsd_kpt14'), - 249: - dict( - name='lsd_kpt31', - id=249, - color=[128, 64, 0], - type='', - swap='lsd_kpt13'), - 250: - dict( - name='lsd_kpt32', - id=250, - color=[128, 64, 0], - type='', - swap='lsd_kpt12'), - 251: - dict( - name='lsd_kpt33', - id=251, - color=[128, 64, 0], - type='', - swap='lsd_kpt11'), - 252: - dict( - name='lsd_kpt34', - id=252, - color=[128, 64, 0], - type='', - swap='lsd_kpt10'), - 253: - dict( - name='lsd_kpt35', - id=253, - color=[128, 64, 0], - type='', - swap='lsd_kpt9'), - 254: - dict( - name='lsd_kpt36', - id=254, - color=[128, 64, 0], - type='', - swap='lsd_kpt8'), - 255: - dict( - name='lsd_kpt37', - id=255, - color=[128, 64, 0], - type='', - swap='lsd_kpt7'), - 256: - dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''), - 257: - dict( - name='vd_kpt2', - id=257, - color=[128, 64, 255], - type='', - swap='vd_kpt6'), - 258: - dict( - name='vd_kpt3', - id=258, - color=[128, 64, 255], - type='', - swap='vd_kpt5'), - 259: - dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''), - 260: - dict( - name='vd_kpt5', - id=260, - color=[128, 64, 255], - type='', - swap='vd_kpt3'), - 261: - dict( - name='vd_kpt6', - id=261, - color=[128, 64, 255], - type='', - swap='vd_kpt2'), - 262: - dict( - name='vd_kpt7', - id=262, - color=[128, 64, 255], - type='', - swap='vd_kpt19'), - 263: - dict( - name='vd_kpt8', - id=263, - color=[128, 64, 255], - type='', - swap='vd_kpt18'), - 264: - dict( - name='vd_kpt9', - id=264, - color=[128, 64, 255], - type='', - swap='vd_kpt17'), - 265: - dict( - name='vd_kpt10', - id=265, - color=[128, 64, 255], - type='', - swap='vd_kpt16'), - 266: - dict( - name='vd_kpt11', - id=266, - color=[128, 64, 255], - type='', - swap='vd_kpt15'), - 267: - dict( - name='vd_kpt12', - id=267, - color=[128, 64, 255], - type='', - swap='vd_kpt14'), - 268: - dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''), - 269: - dict( - name='vd_kpt14', - id=269, - color=[128, 64, 255], - type='', - swap='vd_kpt12'), - 270: - dict( - name='vd_kpt15', - id=270, - color=[128, 64, 255], - type='', - swap='vd_kpt11'), - 271: - dict( - name='vd_kpt16', - id=271, - color=[128, 64, 255], - type='', - swap='vd_kpt10'), - 272: - dict( - name='vd_kpt17', - id=272, - color=[128, 64, 255], - type='', - swap='vd_kpt9'), - 273: - dict( - name='vd_kpt18', - id=273, - color=[128, 64, 255], - type='', - swap='vd_kpt8'), - 274: - dict( - name='vd_kpt19', - id=274, - color=[128, 64, 255], - type='', - swap='vd_kpt7'), - 275: - dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''), - 276: - dict( - name='sd_kpt2', - id=276, - color=[128, 64, 0], - type='', - swap='sd_kpt6'), - 277: - dict( - name='sd_kpt3', - id=277, - color=[128, 64, 0], - type='', - swap='sd_kpt5'), - 278: - dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''), - 279: - dict( - name='sd_kpt5', - id=279, - color=[128, 64, 0], - type='', - swap='sd_kpt3'), - 
280: - dict( - name='sd_kpt6', - id=280, - color=[128, 64, 0], - type='', - swap='sd_kpt2'), - 281: - dict( - name='sd_kpt7', - id=281, - color=[128, 64, 0], - type='', - swap='sd_kpt19'), - 282: - dict( - name='sd_kpt8', - id=282, - color=[128, 64, 0], - type='', - swap='sd_kpt18'), - 283: - dict( - name='sd_kpt9', - id=283, - color=[128, 64, 0], - type='', - swap='sd_kpt17'), - 284: - dict( - name='sd_kpt10', - id=284, - color=[128, 64, 0], - type='', - swap='sd_kpt16'), - 285: - dict( - name='sd_kpt11', - id=285, - color=[128, 64, 0], - type='', - swap='sd_kpt15'), - 286: - dict( - name='sd_kpt12', - id=286, - color=[128, 64, 0], - type='', - swap='sd_kpt14'), - 287: - dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''), - 288: - dict( - name='sd_kpt14', - id=288, - color=[128, 64, 0], - type='', - swap='sd_kpt12'), - 289: - dict( - name='sd_kpt15', - id=289, - color=[128, 64, 0], - type='', - swap='sd_kpt11'), - 290: - dict( - name='sd_kpt16', - id=290, - color=[128, 64, 0], - type='', - swap='sd_kpt10'), - 291: - dict( - name='sd_kpt17', - id=291, - color=[128, 64, 0], - type='', - swap='sd_kpt9'), - 292: - dict( - name='sd_kpt18', - id=292, - color=[128, 64, 0], - type='', - swap='sd_kpt8'), - 293: - dict( - name='sd_kpt19', - id=293, - color=[128, 64, 0], - type='', - swap='sd_kpt7') - }), - skeleton_info=dict({ - 0: - dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]), - 1: - dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]), - 2: - dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]), - 3: - dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]), - 4: - dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]), - 5: - dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]), - 6: - dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]), - 7: - dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]), - 8: - dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]), - 9: - dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]), - 10: - dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]), - 11: - dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]), - 12: - dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]), - 13: - dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]), - 14: - dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]), - 15: - dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]), - 16: - dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]), - 17: - dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]), - 18: - dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]), - 19: - dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]), - 20: - dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]), - 21: - dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]), - 22: - dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]), - 23: - dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]), - 24: - dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]), - 25: - dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]), - 26: - dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]), - 27: - dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]), - 28: - dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]), - 29: - dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 
128]), - 30: - dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]), - 31: - dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]), - 32: - dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]), - 33: - dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]), - 34: - dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]), - 35: - dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]), - 36: - dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]), - 37: - dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]), - 38: - dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]), - 39: - dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]), - 40: - dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]), - 41: - dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]), - 42: - dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]), - 43: - dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]), - 44: - dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]), - 45: - dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]), - 46: - dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]), - 47: - dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]), - 48: - dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]), - 49: - dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]), - 50: - dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]), - 51: - dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]), - 52: - dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]), - 53: - dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]), - 54: - dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]), - 55: - dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]), - 56: - dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]), - 57: - dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]), - 58: - dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]), - 59: - dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]), - 60: - dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]), - 61: - dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]), - 62: - dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]), - 63: - dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]), - 64: - dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]), - 65: - dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]), - 66: - dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]), - 67: - dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]), - 68: - dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]), - 69: - dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]), - 70: - dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]), - 71: - dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]), - 72: - dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]), - 73: - dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]), - 74: - dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]), - 75: - dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]), - 76: - dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]), - 77: - dict(link=('sso_kpt6', 'sso_kpt25'), 
id=77, color=[128, 0, 255]), - 78: - dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]), - 79: - dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]), - 80: - dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]), - 81: - dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]), - 82: - dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]), - 83: - dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]), - 84: - dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]), - 85: - dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]), - 86: - dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]), - 87: - dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]), - 88: - dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]), - 89: - dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]), - 90: - dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]), - 91: - dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]), - 92: - dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]), - 93: - dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]), - 94: - dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]), - 95: - dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]), - 96: - dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]), - 97: - dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]), - 98: - dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]), - 99: - dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]), - 100: - dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]), - 101: - dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]), - 102: - dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]), - 103: - dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]), - 104: - dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]), - 105: - dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]), - 106: - dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]), - 107: - dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]), - 108: - dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]), - 109: - dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]), - 110: - dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]), - 111: - dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]), - 112: - dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]), - 113: - dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]), - 114: - dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]), - 115: - dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]), - 116: - dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]), - 117: - dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]), - 118: - dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]), - 119: - dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]), - 120: - dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]), - 121: - dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]), - 122: - dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]), - 123: - dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]), - 124: - dict(link=('lso_kpt23', 'lso_kpt22'), 
id=124, color=[0, 128, 255]), - 125: - dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]), - 126: - dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]), - 127: - dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]), - 128: - dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]), - 129: - dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]), - 130: - dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]), - 131: - dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]), - 132: - dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]), - 133: - dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]), - 134: - dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]), - 135: - dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]), - 136: - dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]), - 137: - dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]), - 138: - dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]), - 139: - dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]), - 140: - dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]), - 141: - dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]), - 142: - dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]), - 143: - dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]), - 144: - dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]), - 145: - dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]), - 146: - dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]), - 147: - dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]), - 148: - dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]), - 149: - dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]), - 150: - dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]), - 151: - dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]), - 152: - dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]), - 153: - dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]), - 154: - dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]), - 155: - dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]), - 156: - dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]), - 157: - dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]), - 158: - dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]), - 159: - dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]), - 160: - dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]), - 161: - dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]), - 162: - dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]), - 163: - dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]), - 164: - dict( - link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128, - 128]), - 165: - dict( - link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128, - 128]), - 166: - dict( - link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128, - 128]), - 167: - dict( - link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128, - 128]), - 168: - dict( - link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128, - 128]), - 169: - dict( - link=('shorts_kpt8', 'shorts_kpt9'), id=169, 
color=[128, 128, - 128]), - 170: - dict( - link=('shorts_kpt9', 'shorts_kpt10'), - id=170, - color=[128, 128, 128]), - 171: - dict( - link=('shorts_kpt10', 'shorts_kpt3'), - id=171, - color=[128, 128, 128]), - 172: - dict( - link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128, - 128]), - 173: - dict( - link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128, - 128]), - 174: - dict( - link=('trousers_kpt1', 'trousers_kpt4'), - id=174, - color=[128, 0, 128]), - 175: - dict( - link=('trousers_kpt4', 'trousers_kpt5'), - id=175, - color=[128, 0, 128]), - 176: - dict( - link=('trousers_kpt5', 'trousers_kpt6'), - id=176, - color=[128, 0, 128]), - 177: - dict( - link=('trousers_kpt6', 'trousers_kpt7'), - id=177, - color=[128, 0, 128]), - 178: - dict( - link=('trousers_kpt7', 'trousers_kpt8'), - id=178, - color=[128, 0, 128]), - 179: - dict( - link=('trousers_kpt8', 'trousers_kpt9'), - id=179, - color=[128, 0, 128]), - 180: - dict( - link=('trousers_kpt9', 'trousers_kpt10'), - id=180, - color=[128, 0, 128]), - 181: - dict( - link=('trousers_kpt10', 'trousers_kpt11'), - id=181, - color=[128, 0, 128]), - 182: - dict( - link=('trousers_kpt11', 'trousers_kpt12'), - id=182, - color=[128, 0, 128]), - 183: - dict( - link=('trousers_kpt12', 'trousers_kpt13'), - id=183, - color=[128, 0, 128]), - 184: - dict( - link=('trousers_kpt13', 'trousers_kpt14'), - id=184, - color=[128, 0, 128]), - 185: - dict( - link=('trousers_kpt14', 'trousers_kpt3'), - id=185, - color=[128, 0, 128]), - 186: - dict( - link=('trousers_kpt3', 'trousers_kpt2'), - id=186, - color=[128, 0, 128]), - 187: - dict( - link=('trousers_kpt2', 'trousers_kpt1'), - id=187, - color=[128, 0, 128]), - 188: - dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]), - 189: - dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]), - 190: - dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]), - 191: - dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]), - 192: - dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]), - 193: - dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]), - 194: - dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]), - 195: - dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]), - 196: - dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]), - 197: - dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]), - 198: - dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]), - 199: - dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]), - 200: - dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]), - 201: - dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]), - 202: - dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]), - 203: - dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]), - 204: - dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]), - 205: - dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]), - 206: - dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]), - 207: - dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]), - 208: - dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]), - 209: - dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]), - 210: - dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]), - 211: - dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]), - 212: - 
dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
- 213:
- dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
- 214:
- dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
- 215:
- dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
- 216:
- dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
- 217:
- dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
- 218:
- dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
- 219:
- dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
- 220:
- dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
- 221:
- dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
- 222:
- dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
- 223:
- dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
- 224:
- dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
- 225:
- dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
- 226:
- dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
- 227:
- dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
- 228:
- dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
- 229:
- dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
- 230:
- dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
- 231:
- dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
- 232:
- dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
- 233:
- dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
- 234:
- dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
- 235:
- dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
- 236:
- dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
- 237:
- dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
- 238:
- dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
- 239:
- dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
- 240:
- dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
- 241:
- dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
- 242:
- dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
- 243:
- dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
- 244:
- dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
- 245:
- dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
- 246:
- dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
- 247:
- dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
- 248:
- dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
- 249:
- dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
- 250:
- dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
- 251:
- dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
- 252:
- dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
- 253:
- dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
- 254:
- dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
- 255:
- dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
- 256:
- dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
- 257:
- dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
- 258:
- dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64,
0]), - 259: - dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]), - 260: - dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]), - 261: - dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]), - 262: - dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]), - 263: - dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]), - 264: - dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]), - 265: - dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]), - 266: - dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]), - 267: - dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]), - 268: - dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]), - 269: - dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]), - 270: - dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]), - 271: - dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]), - 272: - dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]), - 273: - dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]), - 274: - dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]), - 275: - dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]), - 276: - dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]), - 277: - dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]), - 278: - dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]), - 279: - dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]), - 280: - dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]), - 281: - dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]), - 282: - dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]), - 283: - dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]), - 284: - dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]), - 285: - dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]), - 286: - dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]), - 287: - dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]), - 288: - dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]), - 289: - dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]), - 290: - dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]), - 291: - dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]), - 292: - dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]), - 293: - dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]), - 294: - dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]), - 295: - dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]), - 296: - dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]), - 297: - dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]), - 298: - dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]), - 299: - dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]), - 300: - dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]), - 301: - dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]), - 302: - dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]), - 303: - dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0]) - }), - joint_weights=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 - ], - sigmas=[]) -param_scheduler = [ - dict( - type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False), - dict( - type='MultiStepLR', - begin=0, - end=120, - milestones=[80, 100], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) -auto_scale_lr = dict(base_batch_size=512) -dataset_type = 'DeepFashion2Dataset' -data_mode = 'topdown' -data_root = 'data/deepfashion2/' -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') -] -val_pipeline = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') -] -train_dataloader = dict( - batch_size=64, - num_workers=6, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='train/deepfashion2_vest.json', - data_prefix=dict(img='train/image/'), - pipeline=[ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') - ])) -val_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - 
ann_file='validation/deepfashion2_vest.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -test_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='validation/deepfashion2_vest.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 
247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) -model = dict( - type='TopdownPoseEstimator', - data_preprocessor=dict( - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - head=dict( - type='HeatmapHead', - in_channels=2048, - out_channels=294, - loss=dict(type='KeypointMSELoss', use_target_weight=True), - decoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True)) -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -test_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -launcher = 'pytorch' -work_dir = './work_dirs/td_hm_res50_4xb64-120e_deepfashion2_vest_256x192' diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.d.ts deleted file mode 100644 index e2b225621796992d6002854700331fc5df314b19..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/YAMLMake.d.ts +++ /dev/null @@ -1,15 +0,0 @@ -import Builders from './builders/Builders'; -export default YAMLMake; - -declare namespace YAMLMake { - type BuilderType = Builders.BuilderType; - type BuildersType = { [name: string]: BuilderType } -} - -declare function YAMLMake( - scene: Phaser.Scene, - data: Object | string, - view?: Object | string, - styles?: Object | string, - customBuilders?: YAMLMake.BuildersType -): Phaser.GameObjects.GameObject; \ No newline at end of file diff --git a/spaces/AiMimicry/sovits-models/data_utils.py b/spaces/AiMimicry/sovits-models/data_utils.py deleted file mode 100644 index 7c76fd1c3a45b8304d916161718c7763874f3e35..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/data_utils.py +++ /dev/null @@ -1,155 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import modules.commons as commons -import utils -from modules.mel_processing import spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text - -# import h5py - - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths, hparams, all_in_mem: bool = False): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.spec_len = hparams.train.max_speclen - self.spk_map = hparams.spk - - random.seed(1234) - random.shuffle(self.audiopaths) - - self.all_in_mem = all_in_mem - if self.all_in_mem: - self.cache = [self.get_audio(p[0]) for p in self.audiopaths] - - def get_audio(self, filename): - filename = filename.replace("\\", "/") - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - - # Ideally, all data generated after Mar 25 should have .spec.pt - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split("/")[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - f0 = np.load(filename + ".f0.npy") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - c = torch.load(filename+ ".soft.pt") - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[0]) - - - lmin = min(c.size(-1), spec.size(-1)) - assert abs(c.size(-1) - spec.size(-1)) < 3, (c.size(-1), spec.size(-1), f0.shape, filename) - assert abs(audio_norm.shape[1]-lmin * self.hop_length) < 3 * self.hop_length - spec, c, f0, uv = spec[:, :lmin], c[:, :lmin], f0[:lmin], uv[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def random_slice(self, c, f0, spec, audio_norm, spk, uv): - # if spec.shape[1] < 30: - # print("skip too short audio:", filename) - # return None - if spec.shape[1] > 800: - start = random.randint(0, spec.shape[1]-800) - end = start + 790 - spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end] - audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def __getitem__(self, index): - if self.all_in_mem: - return self.random_slice(*self.cache[index]) - else: - return self.random_slice(*self.get_audio(self.audiopaths[index][0])) - - def __len__(self): - return len(self.audiopaths) - - -class TextAudioCollate: - - def __call__(self, batch): - batch = [b for b in batch if b is not None] - - input_lengths, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].shape[1] for x in batch]), - dim=0, descending=True) - - max_c_len = max([x[0].size(1) for x in batch]) - max_wav_len = max([x[3].size(1) for x in batch]) - - lengths = torch.LongTensor(len(batch)) - - c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len) - f0_padded = torch.FloatTensor(len(batch), max_c_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - spkids = torch.LongTensor(len(batch), 1) - uv_padded = 
torch.FloatTensor(len(batch), max_c_len) - - c_padded.zero_() - spec_padded.zero_() - f0_padded.zero_() - wav_padded.zero_() - uv_padded.zero_() - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - c = row[0] - c_padded[i, :, :c.size(1)] = c - lengths[i] = c.size(1) - - f0 = row[1] - f0_padded[i, :f0.size(0)] = f0 - - spec = row[2] - spec_padded[i, :, :spec.size(1)] = spec - - wav = row[3] - wav_padded[i, :, :wav.size(1)] = wav - - spkids[i, 0] = row[4] - - uv = row[5] - uv_padded[i, :uv.size(0)] = uv - - return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded diff --git a/spaces/Aki004/herta-so-vits/modules/enhancer.py b/spaces/Aki004/herta-so-vits/modules/enhancer.py deleted file mode 100644 index 37676311f7d8dc4ddc2a5244dedc27b2437e04f5..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/modules/enhancer.py +++ /dev/null @@ -1,105 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from vdecoder.nsf_hifigan.nvSTFT import STFT -from vdecoder.nsf_hifigan.models import load_model -from torchaudio.transforms import Resample - -class Enhancer: - def __init__(self, enhancer_type, enhancer_ckpt, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - - if enhancer_type == 'nsf-hifigan': - self.enhancer = NsfHifiGAN(enhancer_ckpt, device=self.device) - else: - raise ValueError(f" [x] Unknown enhancer: {enhancer_type}") - - self.resample_kernel = {} - self.enhancer_sample_rate = self.enhancer.sample_rate() - self.enhancer_hop_size = self.enhancer.hop_size() - - def enhance(self, - audio, # 1, T - sample_rate, - f0, # 1, n_frames, 1 - hop_size, - adaptive_key = 0, - silence_front = 0 - ): - # enhancer start time - start_frame = int(silence_front * sample_rate / hop_size) - real_silence_front = start_frame * hop_size / sample_rate - audio = audio[:, int(np.round(real_silence_front * sample_rate)) : ] - f0 = f0[: , start_frame :, :] - - # adaptive parameters - adaptive_factor = 2 ** ( -adaptive_key / 12) - adaptive_sample_rate = 100 * int(np.round(self.enhancer_sample_rate / adaptive_factor / 100)) - real_factor = self.enhancer_sample_rate / adaptive_sample_rate - - # resample the ddsp output - if sample_rate == adaptive_sample_rate: - audio_res = audio - else: - key_str = str(sample_rate) + str(adaptive_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(sample_rate, adaptive_sample_rate, lowpass_filter_width = 128).to(self.device) - audio_res = self.resample_kernel[key_str](audio) - - n_frames = int(audio_res.size(-1) // self.enhancer_hop_size + 1) - - # resample f0 - f0_np = f0.squeeze(0).squeeze(-1).cpu().numpy() - f0_np *= real_factor - time_org = (hop_size / sample_rate) * np.arange(len(f0_np)) / real_factor - time_frame = (self.enhancer_hop_size / self.enhancer_sample_rate) * np.arange(n_frames) - f0_res = np.interp(time_frame, time_org, f0_np, left=f0_np[0], right=f0_np[-1]) - f0_res = torch.from_numpy(f0_res).unsqueeze(0).float().to(self.device) # 1, n_frames - - # enhance - enhanced_audio, enhancer_sample_rate = self.enhancer(audio_res, f0_res) - - # resample the enhanced output - if adaptive_factor != 0: - key_str = str(adaptive_sample_rate) + str(enhancer_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(adaptive_sample_rate, enhancer_sample_rate, lowpass_filter_width = 128).to(self.device) - enhanced_audio = 
self.resample_kernel[key_str](enhanced_audio) - - # pad the silence frames - if start_frame > 0: - enhanced_audio = F.pad(enhanced_audio, (int(np.round(enhancer_sample_rate * real_silence_front)), 0)) - - return enhanced_audio, enhancer_sample_rate - - -class NsfHifiGAN(torch.nn.Module): - def __init__(self, model_path, device=None): - super().__init__() - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - print('| Load HifiGAN: ', model_path) - self.model, self.h = load_model(model_path, device=self.device) - - def sample_rate(self): - return self.h.sampling_rate - - def hop_size(self): - return self.h.hop_size - - def forward(self, audio, f0): - stft = STFT( - self.h.sampling_rate, - self.h.num_mels, - self.h.n_fft, - self.h.win_size, - self.h.hop_size, - self.h.fmin, - self.h.fmax) - with torch.no_grad(): - mel = stft.get_mel(audio) - enhanced_audio = self.model(mel, f0[:,:mel.size(-1)]).view(-1) - return enhanced_audio, self.h.sampling_rate \ No newline at end of file diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/mel_processing.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/mel_processing.py deleted file mode 100644 index 3614150259809983e776d3fed83021decca06a9c..0000000000000000000000000000000000000000 --- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in 
mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y.float(), n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/Alesmikes/Elvirespeak/README.md b/spaces/Alesmikes/Elvirespeak/README.md deleted file mode 100644 index 282cf0ea4e4017c353a759079eae33673000ff2e..0000000000000000000000000000000000000000 --- a/spaces/Alesmikes/Elvirespeak/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: QnA -emoji: 📈 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -duplicated_from: GenAIDemo/economic-forecast ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlexWang/lama/saicinpainting/training/trainers/default.py b/spaces/AlexWang/lama/saicinpainting/training/trainers/default.py deleted file mode 100644 index 86c7f0fab42924bfc93a031e851117634c70f593..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/trainers/default.py +++ /dev/null @@ -1,175 +0,0 @@ -import logging - -import torch -import torch.nn.functional as F -from omegaconf import OmegaConf - -from saicinpainting.training.data.datasets import make_constant_area_crop_params -from saicinpainting.training.losses.distance_weighting import make_mask_distance_weighter -from saicinpainting.training.losses.feature_matching import feature_matching_loss, masked_l1_loss -from saicinpainting.training.modules.fake_fakes import FakeFakesGenerator -from saicinpainting.training.trainers.base import BaseInpaintingTrainingModule, make_multiscale_noise -from saicinpainting.utils import add_prefix_to_keys, get_ramp - -LOGGER = logging.getLogger(__name__) - - -def make_constant_area_crop_batch(batch, **kwargs): - crop_y, crop_x, crop_height, crop_width = make_constant_area_crop_params(img_height=batch['image'].shape[2], - img_width=batch['image'].shape[3], - **kwargs) - batch['image'] = batch['image'][:, :, crop_y : crop_y + crop_height, crop_x : crop_x + crop_width] - batch['mask'] = batch['mask'][:, :, crop_y: crop_y + crop_height, 
crop_x: crop_x + crop_width] - return batch - - -class DefaultInpaintingTrainingModule(BaseInpaintingTrainingModule): - def __init__(self, *args, concat_mask=True, rescale_scheduler_kwargs=None, image_to_discriminator='predicted_image', - add_noise_kwargs=None, noise_fill_hole=False, const_area_crop_kwargs=None, - distance_weighter_kwargs=None, distance_weighted_mask_for_discr=False, - fake_fakes_proba=0, fake_fakes_generator_kwargs=None, - **kwargs): - super().__init__(*args, **kwargs) - self.concat_mask = concat_mask - self.rescale_size_getter = get_ramp(**rescale_scheduler_kwargs) if rescale_scheduler_kwargs is not None else None - self.image_to_discriminator = image_to_discriminator - self.add_noise_kwargs = add_noise_kwargs - self.noise_fill_hole = noise_fill_hole - self.const_area_crop_kwargs = const_area_crop_kwargs - self.refine_mask_for_losses = make_mask_distance_weighter(**distance_weighter_kwargs) \ - if distance_weighter_kwargs is not None else None - self.distance_weighted_mask_for_discr = distance_weighted_mask_for_discr - - self.fake_fakes_proba = fake_fakes_proba - if self.fake_fakes_proba > 1e-3: - self.fake_fakes_gen = FakeFakesGenerator(**(fake_fakes_generator_kwargs or {})) - - def forward(self, batch): - if self.training and self.rescale_size_getter is not None: - cur_size = self.rescale_size_getter(self.global_step) - batch['image'] = F.interpolate(batch['image'], size=cur_size, mode='bilinear', align_corners=False) - batch['mask'] = F.interpolate(batch['mask'], size=cur_size, mode='nearest') - - if self.training and self.const_area_crop_kwargs is not None: - batch = make_constant_area_crop_batch(batch, **self.const_area_crop_kwargs) - - img = batch['image'] - mask = batch['mask'] - - masked_img = img * (1 - mask) - - if self.add_noise_kwargs is not None: - noise = make_multiscale_noise(masked_img, **self.add_noise_kwargs) - if self.noise_fill_hole: - masked_img = masked_img + mask * noise[:, :masked_img.shape[1]] - masked_img = torch.cat([masked_img, noise], dim=1) - - if self.concat_mask: - masked_img = torch.cat([masked_img, mask], dim=1) - - batch['predicted_image'] = self.generator(masked_img) - batch['inpainted'] = mask * batch['predicted_image'] + (1 - mask) * batch['image'] - - if self.fake_fakes_proba > 1e-3: - if self.training and torch.rand(1).item() < self.fake_fakes_proba: - batch['fake_fakes'], batch['fake_fakes_masks'] = self.fake_fakes_gen(img, mask) - batch['use_fake_fakes'] = True - else: - batch['fake_fakes'] = torch.zeros_like(img) - batch['fake_fakes_masks'] = torch.zeros_like(mask) - batch['use_fake_fakes'] = False - - batch['mask_for_losses'] = self.refine_mask_for_losses(img, batch['predicted_image'], mask) \ - if self.refine_mask_for_losses is not None and self.training \ - else mask - - return batch - - def generator_loss(self, batch): - img = batch['image'] - predicted_img = batch[self.image_to_discriminator] - original_mask = batch['mask'] - supervised_mask = batch['mask_for_losses'] - - # L1 - l1_value = masked_l1_loss(predicted_img, img, supervised_mask, - self.config.losses.l1.weight_known, - self.config.losses.l1.weight_missing) - - total_loss = l1_value - metrics = dict(gen_l1=l1_value) - - # vgg-based perceptual loss - if self.config.losses.perceptual.weight > 0: - pl_value = self.loss_pl(predicted_img, img, mask=supervised_mask).sum() * self.config.losses.perceptual.weight - total_loss = total_loss + pl_value - metrics['gen_pl'] = pl_value - - # discriminator - # adversarial_loss calls backward by itself - mask_for_discr = 
supervised_mask if self.distance_weighted_mask_for_discr else original_mask - self.adversarial_loss.pre_generator_step(real_batch=img, fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(img) - discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_gen_loss, adv_metrics = self.adversarial_loss.generator_loss(real_batch=img, - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=mask_for_discr) - total_loss = total_loss + adv_gen_loss - metrics['gen_adv'] = adv_gen_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - # feature matching - if self.config.losses.feature_matching.weight > 0: - need_mask_in_fm = OmegaConf.to_container(self.config.losses.feature_matching).get('pass_mask', False) - mask_for_fm = supervised_mask if need_mask_in_fm else None - fm_value = feature_matching_loss(discr_fake_features, discr_real_features, - mask=mask_for_fm) * self.config.losses.feature_matching.weight - total_loss = total_loss + fm_value - metrics['gen_fm'] = fm_value - - if self.loss_resnet_pl is not None: - resnet_pl_value = self.loss_resnet_pl(predicted_img, img) - total_loss = total_loss + resnet_pl_value - metrics['gen_resnet_pl'] = resnet_pl_value - - return total_loss, metrics - - def discriminator_loss(self, batch): - total_loss = 0 - metrics = {} - - predicted_img = batch[self.image_to_discriminator].detach() - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=predicted_img, - generator=self.generator, discriminator=self.discriminator) - discr_real_pred, discr_real_features = self.discriminator(batch['image']) - discr_fake_pred, discr_fake_features = self.discriminator(predicted_img) - adv_discr_loss, adv_metrics = self.adversarial_loss.discriminator_loss(real_batch=batch['image'], - fake_batch=predicted_img, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_pred, - mask=batch['mask']) - total_loss = total_loss + adv_discr_loss - metrics['discr_adv'] = adv_discr_loss - metrics.update(add_prefix_to_keys(adv_metrics, 'adv_')) - - - if batch.get('use_fake_fakes', False): - fake_fakes = batch['fake_fakes'] - self.adversarial_loss.pre_discriminator_step(real_batch=batch['image'], fake_batch=fake_fakes, - generator=self.generator, discriminator=self.discriminator) - discr_fake_fakes_pred, _ = self.discriminator(fake_fakes) - fake_fakes_adv_discr_loss, fake_fakes_adv_metrics = self.adversarial_loss.discriminator_loss( - real_batch=batch['image'], - fake_batch=fake_fakes, - discr_real_pred=discr_real_pred, - discr_fake_pred=discr_fake_fakes_pred, - mask=batch['mask'] - ) - total_loss = total_loss + fake_fakes_adv_discr_loss - metrics['discr_adv_fake_fakes'] = fake_fakes_adv_discr_loss - metrics.update(add_prefix_to_keys(fake_fakes_adv_metrics, 'adv_')) - - return total_loss, metrics diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms 
import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = 
torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - 
padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/augment.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/augment.py deleted file mode 100644 index 8067f4e3fec058c9025edaa7a9a0442afe859ae5..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training/augment.py +++ /dev/null @@ -1,562 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Augmentation pipeline from the paper -"Training Generative Adversarial Networks with Limited Data". -Matches the original implementation by Karras et al. at -https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py""" - -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -# ---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. 
- -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -# ---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. 
- - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant( - x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0] - vy = v[..., 1] - vz = v[..., 2] - s = torch.sin(theta) - c = torch.cos(theta) - cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -# ---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. - - -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1, 1, 1, 1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - # Overall multiplier for augmentation probability. - self.register_buffer('p', torch.ones([])) - - # Pixel blitting. - # Probability multiplier for x-flip. - self.xflip = float(xflip) - # Probability multiplier for 90 degree rotations. - self.rotate90 = float(rotate90) - # Probability multiplier for integer translation. - self.xint = float(xint) - # Range of integer translation, relative to image dimensions. - self.xint_max = float(xint_max) - - # General geometric transformations. - # Probability multiplier for isotropic scaling. - self.scale = float(scale) - # Probability multiplier for arbitrary rotation. - self.rotate = float(rotate) - # Probability multiplier for anisotropic scaling. - self.aniso = float(aniso) - # Probability multiplier for fractional translation. - self.xfrac = float(xfrac) - # Log2 standard deviation of isotropic scaling. 
- self.scale_std = float(scale_std) - # Range of arbitrary rotation, 1 = full circle. - self.rotate_max = float(rotate_max) - # Log2 standard deviation of anisotropic scaling. - self.aniso_std = float(aniso_std) - # Standard deviation of frational translation, relative to image dimensions. - self.xfrac_std = float(xfrac_std) - - # Color transformations. - # Probability multiplier for brightness. - self.brightness = float(brightness) - # Probability multiplier for contrast. - self.contrast = float(contrast) - # Probability multiplier for luma flip. - self.lumaflip = float(lumaflip) - # Probability multiplier for hue rotation. - self.hue = float(hue) - # Probability multiplier for saturation. - self.saturation = float(saturation) - # Standard deviation of brightness. - self.brightness_std = float(brightness_std) - # Log2 standard deviation of contrast. - self.contrast_std = float(contrast_std) - # Range of hue rotation, 1 = full circle. - self.hue_max = float(hue_max) - # Log2 standard deviation of saturation. - self.saturation_std = float(saturation_std) - - # Image-space filtering. - # Probability multiplier for image-space filtering. - self.imgfilter = float(imgfilter) - # Probability multipliers for individual frequency bands. - self.imgfilter_bands = list(imgfilter_bands) - # Log2 standard deviation of image-space filter amplification. - self.imgfilter_std = float(imgfilter_std) - - # Image-space corruptions. - # Probability multiplier for additive RGB noise. - self.noise = float(noise) - # Probability multiplier for cutout. - self.cutout = float(cutout) - # Standard deviation of additive RGB noise. - self.noise_std = float(noise_std) - # Size of the cutout rectangle, relative to image dimensions. - self.cutout_size = float(cutout_size) - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer( - 'Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. - Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape( - Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // - 2: (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor( - Hz_fbank, dtype=torch.float32)) - - def forward(self, images, debug_percentile=None): - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - if debug_percentile is not None: - debug_percentile = torch.as_tensor( - debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). 
- if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand( - [batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). - if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand( - [batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) - * 2 - 1) * self.xint_max - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like( - t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round( - t[:, 0] * width), torch.round(t[:, 1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. - # P(pre OR post) = p - p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). 
- if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv( - debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:, 0] * width, t[:, 1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], - [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute( - 1, 0, 2).flatten(1) # [xy, batch * idx] - # [x0, y0, x1, y1] - margin = torch.cat([-margin, margin]).max(dim=1).values - margin = margin + \ - misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] - * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant( - [width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad( - input=images, pad=[mx0, mx1, my0, my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d( - 2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, - device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, - (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv( - 2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid( - theta=G_inv[:, :2, :], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. - images = upfirdn2d.downsample2d( - x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand( - [batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv( - debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn( - [batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand( - [batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - # Luma axis. 
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) - if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). - if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand( - [batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn( - [batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * \ - C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError( - 'Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - # Expected power spectrum (1/f). - expected_power = misc.constant( - np.array([10, 1, 1, 1]) / 13, device=device) - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - # Global gain vector (identity). - g = torch.ones([batch_size, num_bands], device=device) - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn( - [batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand( - [batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - # Temporary gain vector. - t = torch.ones([batch_size, num_bands], device=device) - # Replace i'th element. - t[:, i] = t_i - # Normalize power. - t = t / (expected_power * t.square() - ).sum(dim=-1, keepdims=True).sqrt() - # Accumulate into global gain. - g = g * t - - # Construct combined amplification filter. 
- # [batch, tap] - Hz_prime = g @ self.Hz_fbank - Hz_prime = Hz_prime.unsqueeze(1).repeat( - [1, num_channels, 1]) # [batch, channels, tap] - # [batch * channels, 1, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape( - [1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad( - input=images, pad=[p, p, p, p], mode='reflect') - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. - # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], - device=device).abs() * self.noise_std - sigma = torch.where(torch.rand( - [batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv( - debug_percentile) * self.noise_std) - images = images + \ - torch.randn([batch_size, num_channels, height, - width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). - if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], - self.cutout_size, device=device) - size = torch.where(torch.rand( - [batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange( - height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -# ---------------------------------------------------------------------------- diff --git a/spaces/An-619/FastSAM/utils/tools.py b/spaces/An-619/FastSAM/utils/tools.py deleted file mode 100644 index e57f8746f31c910a5940dd027ec588ee59e60f17..0000000000000000000000000000000000000000 --- a/spaces/An-619/FastSAM/utils/tools.py +++ /dev/null @@ -1,442 +0,0 @@ -import numpy as np -from PIL import Image -import matplotlib.pyplot as plt -import cv2 -import torch -import os -import sys -import clip - - -def convert_box_xywh_to_xyxy(box): - if len(box) == 4: - return [box[0], box[1], box[0] + box[2], box[1] + box[3]] - else: - result = [] - for b in box: - b = convert_box_xywh_to_xyxy(b) - result.append(b) - return result - - -def segment_image(image, bbox): - image_array = np.array(image) - segmented_image_array = np.zeros_like(image_array) - x1, y1, x2, y2 = bbox - segmented_image_array[y1:y2, x1:x2] = image_array[y1:y2, x1:x2] - segmented_image = Image.fromarray(segmented_image_array) - black_image = Image.new("RGB", image.size, (255, 255, 255)) - # transparency_mask = np.zeros_like((), dtype=np.uint8) - transparency_mask = np.zeros( - (image_array.shape[0], image_array.shape[1]), dtype=np.uint8 - ) - transparency_mask[y1:y2, x1:x2] = 255 - transparency_mask_image = 
Image.fromarray(transparency_mask, mode="L") - black_image.paste(segmented_image, mask=transparency_mask_image) - return black_image - - -def format_results(result, filter=0): - annotations = [] - n = len(result.masks.data) - for i in range(n): - annotation = {} - mask = result.masks.data[i] == 1.0 - - if torch.sum(mask) < filter: - continue - annotation["id"] = i - annotation["segmentation"] = mask.cpu().numpy() - annotation["bbox"] = result.boxes.data[i] - annotation["score"] = result.boxes.conf[i] - annotation["area"] = annotation["segmentation"].sum() - annotations.append(annotation) - return annotations - - -def filter_masks(annotations): # filter the overlap mask - annotations.sort(key=lambda x: x["area"], reverse=True) - to_remove = set() - for i in range(0, len(annotations)): - a = annotations[i] - for j in range(i + 1, len(annotations)): - b = annotations[j] - if i != j and j not in to_remove: - # check if - if b["area"] < a["area"]: - if (a["segmentation"] & b["segmentation"]).sum() / b[ - "segmentation" - ].sum() > 0.8: - to_remove.add(j) - - return [a for i, a in enumerate(annotations) if i not in to_remove], to_remove - - -def get_bbox_from_mask(mask): - mask = mask.astype(np.uint8) - contours, hierarchy = cv2.findContours( - mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE - ) - x1, y1, w, h = cv2.boundingRect(contours[0]) - x2, y2 = x1 + w, y1 + h - if len(contours) > 1: - for b in contours: - x_t, y_t, w_t, h_t = cv2.boundingRect(b) - # 将多个bbox合并成一个 - x1 = min(x1, x_t) - y1 = min(y1, y_t) - x2 = max(x2, x_t + w_t) - y2 = max(y2, y_t + h_t) - h = y2 - y1 - w = x2 - x1 - return [x1, y1, x2, y2] - - -def fast_process( - annotations, args, mask_random_color, bbox=None, points=None, edges=False -): - if isinstance(annotations[0], dict): - annotations = [annotation["segmentation"] for annotation in annotations] - result_name = os.path.basename(args.img_path) - image = cv2.imread(args.img_path) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - original_h = image.shape[0] - original_w = image.shape[1] - if sys.platform == "darwin": - plt.switch_backend("TkAgg") - plt.figure(figsize=(original_w/100, original_h/100)) - # Add subplot with no margin. 
- plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0, wspace=0) - plt.margins(0, 0) - plt.gca().xaxis.set_major_locator(plt.NullLocator()) - plt.gca().yaxis.set_major_locator(plt.NullLocator()) - plt.imshow(image) - if args.better_quality == True: - if isinstance(annotations[0], torch.Tensor): - annotations = np.array(annotations.cpu()) - for i, mask in enumerate(annotations): - mask = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8) - ) - annotations[i] = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((8, 8), np.uint8) - ) - if args.device == "cpu": - annotations = np.array(annotations) - fast_show_mask( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - points=points, - point_label=args.point_label, - retinamask=args.retina, - target_height=original_h, - target_width=original_w, - ) - else: - if isinstance(annotations[0], np.ndarray): - annotations = torch.from_numpy(annotations) - fast_show_mask_gpu( - annotations, - plt.gca(), - random_color=args.randomcolor, - bbox=bbox, - points=points, - point_label=args.point_label, - retinamask=args.retina, - target_height=original_h, - target_width=original_w, - ) - if isinstance(annotations, torch.Tensor): - annotations = annotations.cpu().numpy() - if args.withContours == True: - contour_all = [] - temp = np.zeros((original_h, original_w, 1)) - for i, mask in enumerate(annotations): - if type(mask) == dict: - mask = mask["segmentation"] - annotation = mask.astype(np.uint8) - if args.retina == False: - annotation = cv2.resize( - annotation, - (original_w, original_h), - interpolation=cv2.INTER_NEAREST, - ) - contours, hierarchy = cv2.findContours( - annotation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - for contour in contours: - contour_all.append(contour) - cv2.drawContours(temp, contour_all, -1, (255, 255, 255), 2) - color = np.array([0 / 255, 0 / 255, 255 / 255, 0.8]) - contour_mask = temp / 255 * color.reshape(1, 1, -1) - plt.imshow(contour_mask) - - save_path = args.output - if not os.path.exists(save_path): - os.makedirs(save_path) - plt.axis("off") - fig = plt.gcf() - plt.draw() - - try: - buf = fig.canvas.tostring_rgb() - except AttributeError: - fig.canvas.draw() - buf = fig.canvas.tostring_rgb() - - cols, rows = fig.canvas.get_width_height() - img_array = np.fromstring(buf, dtype=np.uint8).reshape(rows, cols, 3) - cv2.imwrite(os.path.join(save_path, result_name), cv2.cvtColor(img_array, cv2.COLOR_RGB2BGR)) - - -# CPU post process -def fast_show_mask( - annotation, - ax, - random_color=False, - bbox=None, - points=None, - point_label=None, - retinamask=True, - target_height=960, - target_width=960, -): - msak_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - # 将annotation 按照面积 排序 - areas = np.sum(annotation, axis=(1, 2)) - sorted_indices = np.argsort(areas) - annotation = annotation[sorted_indices] - - index = (annotation != 0).argmax(axis=0) - if random_color == True: - color = np.random.random((msak_sum, 1, 1, 3)) - else: - color = np.ones((msak_sum, 1, 1, 3)) * np.array( - [30 / 255, 144 / 255, 255 / 255] - ) - transparency = np.ones((msak_sum, 1, 1, 1)) * 0.6 - visual = np.concatenate([color, transparency], axis=-1) - mask_image = np.expand_dims(annotation, -1) * visual - - show = np.zeros((height, weight, 4)) - h_indices, w_indices = np.meshgrid( - np.arange(height), np.arange(weight), indexing="ij" - ) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - # 
使用向量化索引更新show的值 - show[h_indices, w_indices, :] = mask_image[indices] - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - # draw point - if points is not None: - plt.scatter( - [point[0] for i, point in enumerate(points) if point_label[i] == 1], - [point[1] for i, point in enumerate(points) if point_label[i] == 1], - s=20, - c="y", - ) - plt.scatter( - [point[0] for i, point in enumerate(points) if point_label[i] == 0], - [point[1] for i, point in enumerate(points) if point_label[i] == 0], - s=20, - c="m", - ) - - if retinamask == False: - show = cv2.resize( - show, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - ax.imshow(show) - - -def fast_show_mask_gpu( - annotation, - ax, - random_color=False, - bbox=None, - points=None, - point_label=None, - retinamask=True, - target_height=960, - target_width=960, -): - msak_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - areas = torch.sum(annotation, dim=(1, 2)) - sorted_indices = torch.argsort(areas, descending=False) - annotation = annotation[sorted_indices] - # 找每个位置第一个非零值下标 - index = (annotation != 0).to(torch.long).argmax(dim=0) - if random_color == True: - color = torch.rand((msak_sum, 1, 1, 3)).to(annotation.device) - else: - color = torch.ones((msak_sum, 1, 1, 3)).to(annotation.device) * torch.tensor( - [30 / 255, 144 / 255, 255 / 255] - ).to(annotation.device) - transparency = torch.ones((msak_sum, 1, 1, 1)).to(annotation.device) * 0.6 - visual = torch.cat([color, transparency], dim=-1) - mask_image = torch.unsqueeze(annotation, -1) * visual - # 按index取数,index指每个位置选哪个batch的数,把mask_image转成一个batch的形式 - show = torch.zeros((height, weight, 4)).to(annotation.device) - h_indices, w_indices = torch.meshgrid( - torch.arange(height), torch.arange(weight), indexing="ij" - ) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - # 使用向量化索引更新show的值 - show[h_indices, w_indices, :] = mask_image[indices] - show_cpu = show.cpu().numpy() - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - # draw point - if points is not None: - plt.scatter( - [point[0] for i, point in enumerate(points) if point_label[i] == 1], - [point[1] for i, point in enumerate(points) if point_label[i] == 1], - s=20, - c="y", - ) - plt.scatter( - [point[0] for i, point in enumerate(points) if point_label[i] == 0], - [point[1] for i, point in enumerate(points) if point_label[i] == 0], - s=20, - c="m", - ) - if retinamask == False: - show_cpu = cv2.resize( - show_cpu, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - ax.imshow(show_cpu) - - -# clip -@torch.no_grad() -def retriev( - model, preprocess, elements: [Image.Image], search_text: str, device -): - preprocessed_images = [preprocess(image).to(device) for image in elements] - tokenized_text = clip.tokenize([search_text]).to(device) - stacked_images = torch.stack(preprocessed_images) - image_features = model.encode_image(stacked_images) - text_features = model.encode_text(tokenized_text) - image_features /= image_features.norm(dim=-1, keepdim=True) - text_features /= text_features.norm(dim=-1, keepdim=True) - probs = 100.0 * image_features @ text_features.T - return probs[:, 0].softmax(dim=0) - - -def crop_image(annotations, image_like): - if isinstance(image_like, str): - image = Image.open(image_like) - else: - image = 
image_like - ori_w, ori_h = image.size - mask_h, mask_w = annotations[0]["segmentation"].shape - if ori_w != mask_w or ori_h != mask_h: - image = image.resize((mask_w, mask_h)) - cropped_boxes = [] - cropped_images = [] - not_crop = [] - origin_id = [] - for _, mask in enumerate(annotations): - if np.sum(mask["segmentation"]) <= 100: - continue - origin_id.append(_) - bbox = get_bbox_from_mask(mask["segmentation"]) # mask 的 bbox - cropped_boxes.append(segment_image(image, bbox)) # 保存裁剪的图片 - # cropped_boxes.append(segment_image(image,mask["segmentation"])) - cropped_images.append(bbox) # 保存裁剪的图片的bbox - return cropped_boxes, cropped_images, not_crop, origin_id, annotations - - -def box_prompt(masks, bbox, target_height, target_width): - h = masks.shape[1] - w = masks.shape[2] - if h != target_height or w != target_width: - bbox = [ - int(bbox[0] * w / target_width), - int(bbox[1] * h / target_height), - int(bbox[2] * w / target_width), - int(bbox[3] * h / target_height), - ] - bbox[0] = round(bbox[0]) if round(bbox[0]) > 0 else 0 - bbox[1] = round(bbox[1]) if round(bbox[1]) > 0 else 0 - bbox[2] = round(bbox[2]) if round(bbox[2]) < w else w - bbox[3] = round(bbox[3]) if round(bbox[3]) < h else h - - # IoUs = torch.zeros(len(masks), dtype=torch.float32) - bbox_area = (bbox[3] - bbox[1]) * (bbox[2] - bbox[0]) - - masks_area = torch.sum(masks[:, bbox[1] : bbox[3], bbox[0] : bbox[2]], dim=(1, 2)) - orig_masks_area = torch.sum(masks, dim=(1, 2)) - - union = bbox_area + orig_masks_area - masks_area - IoUs = masks_area / union - max_iou_index = torch.argmax(IoUs) - - return masks[max_iou_index].cpu().numpy(), max_iou_index - - -def point_prompt(masks, points, point_label, target_height, target_width): # numpy 处理 - h = masks[0]["segmentation"].shape[0] - w = masks[0]["segmentation"].shape[1] - if h != target_height or w != target_width: - points = [ - [int(point[0] * w / target_width), int(point[1] * h / target_height)] - for point in points - ] - onemask = np.zeros((h, w)) - masks = sorted(masks, key=lambda x: x['area'], reverse=True) - for i, annotation in enumerate(masks): - if type(annotation) == dict: - mask = annotation['segmentation'] - else: - mask = annotation - for i, point in enumerate(points): - if mask[point[1], point[0]] == 1 and point_label[i] == 1: - onemask[mask] = 1 - if mask[point[1], point[0]] == 1 and point_label[i] == 0: - onemask[mask] = 0 - onemask = onemask >= 1 - return onemask, 0 - - -def text_prompt(annotations, text, img_path, device, wider=False, threshold=0.9): - cropped_boxes, cropped_images, not_crop, origin_id, annotations_ = crop_image( - annotations, img_path - ) - clip_model, preprocess = clip.load("./weights/CLIP_ViT_B_32.pt", device=device) - scores = retriev( - clip_model, preprocess, cropped_boxes, text, device=device - ) - max_idx = scores.argsort() - max_idx = max_idx[-1] - max_idx = origin_id[int(max_idx)] - - # find the biggest mask which contains the mask with max score - if wider: - mask0 = annotations_[max_idx]["segmentation"] - area0 = np.sum(mask0) - areas = [(i, np.sum(mask["segmentation"])) for i, mask in enumerate(annotations_) if i in origin_id] - areas = sorted(areas, key=lambda area: area[1], reverse=True) - indices = [area[0] for area in areas] - for index in indices: - if index == max_idx or np.sum(annotations_[index]["segmentation"] & mask0) / area0 > threshold: - max_idx = index - break - - return annotations_[max_idx]["segmentation"], max_idx diff --git 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py deleted file mode 100644 index 7d94b9f230d88e878f05f2611af8d34f9a4dcd26..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py +++ /dev/null @@ -1,622 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import tempfile -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMInverseScheduler, - DDIMScheduler, - DDPMScheduler, - EulerAncestralDiscreteScheduler, - LMSDiscreteScheduler, - StableDiffusionPix2PixZeroPipeline, - UNet2DConditionModel, -) -from diffusers.image_processor import VaeImageProcessor -from diffusers.utils import floats_tensor, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, load_image, load_pt, require_torch_gpu, skip_mps - -from ..pipeline_params import ( - TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, - TEXT_GUIDED_IMAGE_VARIATION_PARAMS, - TEXT_TO_IMAGE_IMAGE_PARAMS, -) -from ..test_pipelines_common import ( - PipelineLatentTesterMixin, - PipelineTesterMixin, - assert_mean_pixel_difference, -) - - -enable_full_determinism() - - -@skip_mps -class StableDiffusionPix2PixZeroPipelineFastTests(PipelineLatentTesterMixin, PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionPix2PixZeroPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"image"} - batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS - image_params = TEXT_TO_IMAGE_IMAGE_PARAMS - image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS - - @classmethod - def setUpClass(cls): - cls.source_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/src_emb_0.pt" - ) - - cls.target_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/tgt_emb_0.pt" - ) - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = DDIMScheduler() - inverse_scheduler = DDIMInverseScheduler() - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - 
bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - "inverse_scheduler": inverse_scheduler, - "caption_generator": None, - "caption_processor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - generator = torch.manual_seed(seed) - - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "cross_attention_guidance_amount": 0.15, - "source_embeds": self.source_embeds, - "target_embeds": self.target_embeds, - "output_type": "numpy", - } - return inputs - - def get_dummy_inversion_inputs(self, device, seed=0): - dummy_image = floats_tensor((2, 3, 32, 32), rng=random.Random(seed)).to(torch_device) - dummy_image = dummy_image / 2 + 0.5 - generator = torch.manual_seed(seed) - - inputs = { - "prompt": [ - "A painting of a squirrel eating a burger", - "A painting of a burger eating a squirrel", - ], - "image": dummy_image.cpu(), - "num_inference_steps": 2, - "guidance_scale": 6.0, - "generator": generator, - "output_type": "numpy", - } - return inputs - - def get_dummy_inversion_inputs_by_type(self, device, seed=0, input_image_type="pt", output_type="np"): - inputs = self.get_dummy_inversion_inputs(device, seed) - - if input_image_type == "pt": - image = inputs["image"] - elif input_image_type == "np": - image = VaeImageProcessor.pt_to_numpy(inputs["image"]) - elif input_image_type == "pil": - image = VaeImageProcessor.pt_to_numpy(inputs["image"]) - image = VaeImageProcessor.numpy_to_pil(image) - else: - raise ValueError(f"unsupported input_image_type {input_image_type}") - - inputs["image"] = image - inputs["output_type"] = output_type - - return inputs - - def test_save_load_optional_components(self): - if not hasattr(self.pipeline_class, "_optional_components"): - return - - components = self.get_dummy_components() - pipe = self.pipeline_class(**components) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - # set all optional components to None and update pipeline config accordingly - for optional_component in pipe._optional_components: - setattr(pipe, optional_component, None) - pipe.register_modules(**{optional_component: None for optional_component in pipe._optional_components}) - - inputs = self.get_dummy_inputs(torch_device) - output = pipe(**inputs)[0] - - with tempfile.TemporaryDirectory() as tmpdir: - pipe.save_pretrained(tmpdir) - pipe_loaded = self.pipeline_class.from_pretrained(tmpdir) - pipe_loaded.to(torch_device) - pipe_loaded.set_progress_bar_config(disable=None) - - for optional_component in pipe._optional_components: - self.assertTrue( - getattr(pipe_loaded, optional_component) is None, - f"`{optional_component}` did not stay set to None after loading.", - ) - - inputs = self.get_dummy_inputs(torch_device) - output_loaded = pipe_loaded(**inputs)[0] - - max_diff = np.abs(output - output_loaded).max() - self.assertLess(max_diff, 1e-4) - - def test_stable_diffusion_pix2pix_zero_inversion(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components 
= self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inversion_inputs(device) - inputs["image"] = inputs["image"][:1] - inputs["prompt"] = inputs["prompt"][:1] - image = sd_pipe.invert(**inputs).images - image_slice = image[0, -3:, -3:, -1] - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.4823, 0.4783, 0.5638, 0.5201, 0.5247, 0.5644, 0.5029, 0.5404, 0.5062]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_inversion_batch(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inversion_inputs(device) - image = sd_pipe.invert(**inputs).images - image_slice = image[1, -3:, -3:, -1] - assert image.shape == (2, 32, 32, 3) - expected_slice = np.array([0.6446, 0.5232, 0.4914, 0.4441, 0.4654, 0.5546, 0.4650, 0.4938, 0.5044]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_default_case(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4863, 0.5053, 0.5033, 0.4007, 0.3571, 0.4768, 0.5176, 0.5277, 0.4940]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - negative_prompt = "french fries" - output = sd_pipe(**inputs, negative_prompt=negative_prompt) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5177, 0.5097, 0.5047, 0.4076, 0.3667, 0.4767, 0.5238, 0.5307, 0.4958]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = EulerAncestralDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.5421, 0.5525, 0.6085, 0.5279, 0.4658, 0.5317, 0.4418, 0.4815, 0.5132]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_ddpm(self): - device = "cpu" # ensure determinism for the 
device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = DDPMScheduler() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4861, 0.5053, 0.5038, 0.3994, 0.3562, 0.4768, 0.5172, 0.5280, 0.4938]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_zero_inversion_pt_np_pil_outputs_equivalent(self): - device = torch_device - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - output_pt = sd_pipe.invert(**self.get_dummy_inversion_inputs_by_type(device, output_type="pt")).images - output_np = sd_pipe.invert(**self.get_dummy_inversion_inputs_by_type(device, output_type="np")).images - output_pil = sd_pipe.invert(**self.get_dummy_inversion_inputs_by_type(device, output_type="pil")).images - - max_diff = np.abs(output_pt.cpu().numpy().transpose(0, 2, 3, 1) - output_np).max() - self.assertLess(max_diff, 1e-4, "`output_type=='pt'` generate different results from `output_type=='np'`") - - max_diff = np.abs(np.array(output_pil[0]) - (output_np[0] * 255).round()).max() - self.assertLess(max_diff, 2.0, "`output_type=='pil'` generate different results from `output_type=='np'`") - - def test_stable_diffusion_pix2pix_zero_inversion_pt_np_pil_inputs_equivalent(self): - device = torch_device - components = self.get_dummy_components() - sd_pipe = StableDiffusionPix2PixZeroPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - out_input_pt = sd_pipe.invert(**self.get_dummy_inversion_inputs_by_type(device, input_image_type="pt")).images - out_input_np = sd_pipe.invert(**self.get_dummy_inversion_inputs_by_type(device, input_image_type="np")).images - out_input_pil = sd_pipe.invert( - **self.get_dummy_inversion_inputs_by_type(device, input_image_type="pil") - ).images - - max_diff = np.abs(out_input_pt - out_input_np).max() - self.assertLess(max_diff, 1e-4, "`input_type=='pt'` generate different result from `input_type=='np'`") - - assert_mean_pixel_difference(out_input_pil, out_input_np, expected_max_diff=1) - - # Non-determinism caused by the scheduler optimizing the latent inputs during inference - @unittest.skip("non-deterministic pipeline") - def test_inference_batch_single_identical(self): - return super().test_inference_batch_single_identical() - - -@slow -@require_torch_gpu -class StableDiffusionPix2PixZeroPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @classmethod - def setUpClass(cls): - cls.source_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat.pt" - ) - - cls.target_embeds = load_pt( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog.pt" - ) - - def get_inputs(self, seed=0): - generator = torch.manual_seed(seed) - - inputs = { - "prompt": "turn him into a cyborg", - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "cross_attention_guidance_amount": 0.15, - "source_embeds": self.source_embeds, - "target_embeds": 
self.target_embeds, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_zero_default(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.5742, 0.5757, 0.5747, 0.5781, 0.5688, 0.5713, 0.5742, 0.5664, 0.5747]) - - assert np.abs(expected_slice - image_slice).max() < 5e-2 - - def test_stable_diffusion_pix2pix_zero_k_lms(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.6367, 0.5459, 0.5146, 0.5479, 0.4905, 0.4753, 0.4961, 0.4629, 0.4624]) - - assert np.abs(expected_slice - image_slice).max() < 5e-2 - - def test_stable_diffusion_pix2pix_zero_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.1345, 0.268, 0.1539, 0.0726, 0.0959, 0.2261, -0.2673, 0.0277, -0.2062]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([0.1393, 0.2637, 0.1617, 0.0724, 0.0987, 0.2271, -0.2666, 0.0299, -0.2104]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == 3 - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing(1) - pipe.enable_sequential_cpu_offload() - - inputs = self.get_inputs() - _ = pipe(**inputs) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure 
that less than 8.2 GB is allocated - assert mem_bytes < 8.2 * 10**9 - - -@slow -@require_torch_gpu -class InversionPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @classmethod - def setUpClass(cls): - raw_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png" - ) - - raw_image = raw_image.convert("RGB").resize((512, 512)) - - cls.raw_image = raw_image - - def test_stable_diffusion_pix2pix_inversion(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) - - caption = "a photography of a cat with flowers" - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - output = pipe.invert(caption, image=self.raw_image, generator=generator, num_inference_steps=10) - inv_latents = output[0] - - image_slice = inv_latents[0, -3:, -3:, -1].flatten() - - assert inv_latents.shape == (1, 4, 64, 64) - expected_slice = np.array([0.8447, -0.0730, 0.7588, -1.2070, -0.4678, 0.1511, -0.8555, 1.1816, -0.7666]) - - assert np.abs(expected_slice - image_slice.cpu().numpy()).max() < 5e-2 - - def test_stable_diffusion_2_pix2pix_inversion(self): - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-1", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) - - caption = "a photography of a cat with flowers" - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - output = pipe.invert(caption, image=self.raw_image, generator=generator, num_inference_steps=10) - inv_latents = output[0] - - image_slice = inv_latents[0, -3:, -3:, -1].flatten() - - assert inv_latents.shape == (1, 4, 64, 64) - expected_slice = np.array([0.8970, -0.1611, 0.4766, -1.1162, -0.5923, 0.1050, -0.9678, 1.0537, -0.6050]) - - assert np.abs(expected_slice - image_slice.cpu().numpy()).max() < 5e-2 - - def test_stable_diffusion_pix2pix_full(self): - # numpy array of https://huggingface.co/datasets/hf-internal-testing/diffusers-images/blob/main/pix2pix/dog.png - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog.npy" - ) - - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) - - caption = "a photography of a cat with flowers" - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - output = pipe.invert(caption, image=self.raw_image, generator=generator) - inv_latents = output[0] - - source_prompts = 4 * ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] - target_prompts = 4 * ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] - - source_embeds = pipe.get_embeds(source_prompts) - target_embeds = pipe.get_embeds(target_prompts) - 
- image = pipe( - caption, - source_embeds=source_embeds, - target_embeds=target_embeds, - num_inference_steps=50, - cross_attention_guidance_amount=0.15, - generator=generator, - latents=inv_latents, - negative_prompt=caption, - output_type="np", - ).images - - max_diff = np.abs(expected_image - image).mean() - assert max_diff < 0.05 - - def test_stable_diffusion_2_pix2pix_full(self): - # numpy array of https://huggingface.co/datasets/hf-internal-testing/diffusers-images/blob/main/pix2pix/dog_2.png - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/dog_2.npy" - ) - - pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-1", safety_checker=None, torch_dtype=torch.float16 - ) - pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) - - caption = "a photography of a cat with flowers" - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.enable_model_cpu_offload() - pipe.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - output = pipe.invert(caption, image=self.raw_image, generator=generator) - inv_latents = output[0] - - source_prompts = 4 * ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] - target_prompts = 4 * ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] - - source_embeds = pipe.get_embeds(source_prompts) - target_embeds = pipe.get_embeds(target_prompts) - - image = pipe( - caption, - source_embeds=source_embeds, - target_embeds=target_embeds, - num_inference_steps=125, - cross_attention_guidance_amount=0.015, - generator=generator, - latents=inv_latents, - negative_prompt=caption, - output_type="np", - ).images - - mean_diff = np.abs(expected_image - image).mean() - assert mean_diff < 0.25 diff --git a/spaces/Andy1621/IAT_enhancement/model/__init__.py b/spaces/Andy1621/IAT_enhancement/model/__init__.py deleted file mode 100644 index be46bf1d519bdda61e4848f29cfad2545547e46c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/IAT_enhancement/model/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .IAT import IAT \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w18_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w18_20e_coco.py deleted file mode 100644 index 391636ff452471af367ed14be5faa49c0b7e1be6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w18_20e_coco.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './htc_hrnetv2p_w32_20e_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(18, 36)), - stage3=dict(num_channels=(18, 36, 72)), - stage4=dict(num_channels=(18, 36, 72, 144)))), - neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_mask_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_mask_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 047a293466a20ea90501e3054d7fcfe23fcdcb39..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_mask_rcnn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' - -model = dict( - roi_head=dict( - type='PISARoIHead', - bbox_head=dict( 
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))), - train_cfg=dict( - rpn_proposal=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - sampler=dict( - type='ScoreHLRSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0.), - isr=dict(k=2, bias=0), - carl=dict(k=1, bias=0.2))), - test_cfg=dict( - rpn=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-model.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-model.py deleted file mode 100644 index 44109d36c222cc1e47215cbe40bf55ff8009b2d1..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-model.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python3 - -import requests - -HOST = '0.0.0.0:5000' - - -def generate(prompt, tokens=200): - request = {'prompt': prompt, 'max_new_tokens': tokens} - response = requests.post(f'http://{HOST}/api/v1/generate', json=request) - - if response.status_code == 200: - return response.json()['results'][0]['text'] - - -def model_api(request): - response = requests.post(f'http://{HOST}/api/v1/model', json=request) - return response.json() - - -# print some common settings -def print_basic_model_info(response): - basic_settings = ['truncation_length', 'instruction_template'] - print("Model: ", response['result']['model_name']) - print("Lora(s): ", response['result']['lora_names']) - for setting in basic_settings: - print(setting, "=", response['result']['shared.settings'][setting]) - - -# model info -def model_info(): - response = model_api({'action': 'info'}) - print_basic_model_info(response) - - -# simple loader -def model_load(model_name): - return model_api({'action': 'load', 'model_name': model_name}) - - -# complex loader -def complex_model_load(model): - - def guess_groupsize(model_name): - if '1024g' in model_name: - return 1024 - elif '128g' in model_name: - return 128 - elif '32g' in model_name: - return 32 - else: - return -1 - - req = { - 'action': 'load', - 'model_name': model, - 'args': { - 'loader': 'AutoGPTQ', - - 'bf16': False, - 'load_in_8bit': False, - 'groupsize': 0, - 'wbits': 0, - - # llama.cpp - 'threads': 0, - 'n_batch': 512, - 'no_mmap': False, - 'mlock': False, - 'cache_capacity': None, - 'n_gpu_layers': 0, - 'n_ctx': 2048, - - # RWKV - 'rwkv_strategy': None, - 'rwkv_cuda_on': False, - - # b&b 4-bit - # 'load_in_4bit': False, - # 'compute_dtype': 'float16', - # 'quant_type': 'nf4', - # 'use_double_quant': False, - - # "cpu": false, - # "auto_devices": false, - # "gpu_memory": null, - # "cpu_memory": null, - # "disk": false, - # "disk_cache_dir": "cache", - }, - } - - model = model.lower() - - if '4bit' in model or 'gptq' in model or 'int4' in model: - req['args']['wbits'] = 4 - req['args']['groupsize'] = guess_groupsize(model) - elif '3bit' in model: - req['args']['wbits'] = 3 - req['args']['groupsize'] = guess_groupsize(model) - else: - req['args']['gptq_for_llama'] = False - - if '8bit' in model: - req['args']['load_in_8bit'] = True - elif '-hf' in model or 'fp16' in model: - if '7b' in model: - req['args']['bf16'] = True # for 24GB - elif '13b' in model: - req['args']['load_in_8bit'] = True # for 24GB - elif 'gguf' in model: - # req['args']['threads'] = 16 - if '7b' in model: - 
req['args']['n_gpu_layers'] = 100 - elif '13b' in model: - req['args']['n_gpu_layers'] = 100 - elif '30b' in model or '33b' in model: - req['args']['n_gpu_layers'] = 59 # 24GB - elif '65b' in model: - req['args']['n_gpu_layers'] = 42 # 24GB - elif 'rwkv' in model: - req['args']['rwkv_cuda_on'] = True - if '14b' in model: - req['args']['rwkv_strategy'] = 'cuda f16i8' # 24GB - else: - req['args']['rwkv_strategy'] = 'cuda f16' # 24GB - - return model_api(req) - - -if __name__ == '__main__': - for model in model_api({'action': 'list'})['result']: - try: - resp = complex_model_load(model) - - if 'error' in resp: - print(f"❌ {model} FAIL Error: {resp['error']['message']}") - continue - else: - print_basic_model_info(resp) - - ans = generate("0,1,1,2,3,5,8,13,", tokens=2) - - if '21' in ans: - print(f"✅ {model} PASS ({ans})") - else: - print(f"❌ {model} FAIL ({ans})") - - except Exception as e: - print(f"❌ {model} FAIL Exception: {repr(e)}") - - -# 0,1,1,2,3,5,8,13, is the fibonacci sequence, the next number is 21. -# Some results below. -""" $ ./model-api-example.py -Model: 4bit_gpt4-x-alpaca-13b-native-4bit-128g-cuda -Lora(s): [] -truncation_length = 2048 -instruction_template = Alpaca -✅ 4bit_gpt4-x-alpaca-13b-native-4bit-128g-cuda PASS (21) -Model: 4bit_WizardLM-13B-Uncensored-4bit-128g -Lora(s): [] -truncation_length = 2048 -instruction_template = WizardLM -✅ 4bit_WizardLM-13B-Uncensored-4bit-128g PASS (21) -Model: Aeala_VicUnlocked-alpaca-30b-4bit -Lora(s): [] -truncation_length = 2048 -instruction_template = Alpaca -✅ Aeala_VicUnlocked-alpaca-30b-4bit PASS (21) -Model: alpaca-30b-4bit -Lora(s): [] -truncation_length = 2048 -instruction_template = Alpaca -✅ alpaca-30b-4bit PASS (21) -""" diff --git a/spaces/Anni123/AuRoRA/demo_utils.py b/spaces/Anni123/AuRoRA/demo_utils.py deleted file mode 100644 index 8ac7d2b63156d74d47e5242a7a0ccc69d8209b0b..0000000000000000000000000000000000000000 --- a/spaces/Anni123/AuRoRA/demo_utils.py +++ /dev/null @@ -1,35 +0,0 @@ -import os -import json - -def self_construction(datatype): - demo_dir = "./demo_pool/{datatype}_demo".format(datatype=datatype) - - data_dir = "./data_pool/{datatype}".format(datatype=datatype) - if os.path.exists(demo_dir): - print(demo_dir) - if os.path.exists(data_dir): - with open(data_dir, 'r') as f: - for line in f.readlines - -self_construction('strategyqa') - -single_data = { - 'question': "asfawreg", - 'datatype': "dfawds", - 'base_ans': "", - 'base_cots': "", - 'adapter_ans': "", - 'revised_cots': "", - 'retrieved_knowledge': "", - 'feedback': "" - } - -data_dir = "./data_pool/{datatype}".format(datatype="test") -#with open(data_dir, 'a') as f: -# data_json = json.dumps(single_data) -# f.write(data_json + "\n") - -with open(data_dir, 'r') as f: - for line in f.readlines(): - data_dict = json.loads(line) - print(type(data_dict)) \ No newline at end of file diff --git a/spaces/ArdaSaygan/PollGeneratorApp/utils.py b/spaces/ArdaSaygan/PollGeneratorApp/utils.py deleted file mode 100644 index 045efd852772091bcdcad9efb6d98cf02dcc6b5c..0000000000000000000000000000000000000000 --- a/spaces/ArdaSaygan/PollGeneratorApp/utils.py +++ /dev/null @@ -1,57 +0,0 @@ - -import openai -openai.api_key = "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" - -class GPTCompletion: - def __init__( - self, - system="You are a helpful AI assistant", - model="gpt-3.5-turbo", - temperature=1.0, - top_p=1.0, - n=1, - stream=False, - stop=None, - max_tokens=256, - presence_penalty=0.0, - frequency_penalty=0.0, - logit_bias={} - ): - self.system = system - 
self.model = model - self.messages = [{"role": "system", "content": f"{self.system}"}] - self.temperature = temperature - self.top_p = top_p - self.n = n - self.stream = stream - self.stop = stop - self.max_tokens = max_tokens - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - - - def chatComplete(self, chatHistory, newMessage,firstMessage=""): - - self.messages.append({"role": "user", "content": f"{firstMessage}"}) - for i in range(len(chatHistory)): - self.messages.append({"role": "user", "content": f"{chatHistory[i][0]}"}) - self.messages.append({"role": "assistant", "content": f"{chatHistory[i][1]}"}) - - self.messages.append({"role": "user", "content": f"{newMessage}"}) - - response = openai.ChatCompletion.create( - model=self.model, - messages=self.messages, - temperature=self.temperature, - top_p=self.top_p, - n=self.n, - stream=self.stream, - stop=self.stop, - max_tokens=self.max_tokens, - presence_penalty=self.presence_penalty, - frequency_penalty=self.frequency_penalty, - logit_bias=self.logit_bias - ) - - return response["choices"][0].message["content"].strip() diff --git a/spaces/AriusXi/CodeGenerator/app.py b/spaces/AriusXi/CodeGenerator/app.py deleted file mode 100644 index 11e6c534d288cd70f251351c75ff9ed19021a58f..0000000000000000000000000000000000000000 --- a/spaces/AriusXi/CodeGenerator/app.py +++ /dev/null @@ -1,17 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForCausalLM -import gradio as grad -codegen_tkn = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono") -mdl = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono") - -def codegen(intent): -# give input as text which reflects intent of the program. - text = " write a function which takes 2 numbers as input and returns the larger of the two" - input_ids = codegen_tkn(intent, return_tensors="pt").input_ids - - gen_ids = mdl.generate(input_ids, max_length=128) - response = codegen_tkn.decode(gen_ids[0], skip_special_tokens=True) - return response - -output=grad.Textbox(lines=1, label="Generated Python Code", placeholder="") -inp=grad.Textbox(lines=1, label="Place your intent here") -grad.Interface(codegen, inputs=inp, outputs=output).launch() \ No newline at end of file diff --git a/spaces/Arnx/MusicGenXvAKN/CHANGELOG.md b/spaces/Arnx/MusicGenXvAKN/CHANGELOG.md deleted file mode 100644 index 24fc214df236b40efead4b1585b01632d9658e9b..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/CHANGELOG.md +++ /dev/null @@ -1,23 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.2a] - TBD - -Improved demo, fixed top p (thanks @jnordberg). - -Compressor tanh on output to avoid clipping with some style (especially piano). -Now repeating the conditioning periodically if it is too short. - -More options when launching Gradio app locally (thanks @ashleykleynhans). - -Testing out PyTorch 2.0 memory efficient attention. - -Added extended generation (infinite length) by slowly moving the windows. -Note that other implementations exist: https://github.com/camenduru/MusicGen-colab. - -## [0.0.1] - 2023-06-09 - -Initial release, with model evaluation only. 
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/filters/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/filters/__init__.py deleted file mode 100644 index c302a6c0c53d7efa8767bd55da2a73535bea0cbf..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/filters/__init__.py +++ /dev/null @@ -1,940 +0,0 @@ -""" - pygments.filters - ~~~~~~~~~~~~~~~~ - - Module containing filter lookup functions and default - filters. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pip._vendor.pygments.token import String, Comment, Keyword, Name, Error, Whitespace, \ - string_to_tokentype -from pip._vendor.pygments.filter import Filter -from pip._vendor.pygments.util import get_list_opt, get_int_opt, get_bool_opt, \ - get_choice_opt, ClassNotFound, OptionError -from pip._vendor.pygments.plugin import find_plugin_filters - - -def find_filter_class(filtername): - """Lookup a filter by name. Return None if not found.""" - if filtername in FILTERS: - return FILTERS[filtername] - for name, cls in find_plugin_filters(): - if name == filtername: - return cls - return None - - -def get_filter_by_name(filtername, **options): - """Return an instantiated filter. - - Options are passed to the filter initializer if wanted. - Raise a ClassNotFound if not found. - """ - cls = find_filter_class(filtername) - if cls: - return cls(**options) - else: - raise ClassNotFound('filter %r not found' % filtername) - - -def get_all_filters(): - """Return a generator of all filter names.""" - yield from FILTERS - for name, _ in find_plugin_filters(): - yield name - - -def _replace_special(ttype, value, regex, specialttype, - replacefunc=lambda x: x): - last = 0 - for match in regex.finditer(value): - start, end = match.start(), match.end() - if start != last: - yield ttype, value[last:start] - yield specialttype, replacefunc(value[start:end]) - last = end - if last != len(value): - yield ttype, value[last:] - - -class CodeTagFilter(Filter): - """Highlight special code tags in comments and docstrings. - - Options accepted: - - `codetags` : list of strings - A list of strings that are flagged as code tags. The default is to - highlight ``XXX``, ``TODO``, ``FIXME``, ``BUG`` and ``NOTE``. - - .. versionchanged:: 2.13 - Now recognizes ``FIXME`` by default. - """ - - def __init__(self, **options): - Filter.__init__(self, **options) - tags = get_list_opt(options, 'codetags', - ['XXX', 'TODO', 'FIXME', 'BUG', 'NOTE']) - self.tag_re = re.compile(r'\b(%s)\b' % '|'.join([ - re.escape(tag) for tag in tags if tag - ])) - - def filter(self, lexer, stream): - regex = self.tag_re - for ttype, value in stream: - if ttype in String.Doc or \ - ttype in Comment and \ - ttype not in Comment.Preproc: - yield from _replace_special(ttype, value, regex, Comment.Special) - else: - yield ttype, value - - -class SymbolFilter(Filter): - """Convert mathematical symbols such as \\ in Isabelle - or \\longrightarrow in LaTeX into Unicode characters. - - This is mostly useful for HTML or console output when you want to - approximate the source rendering you'd see in an IDE. - - Options accepted: - - `lang` : string - The symbol language. Must be one of ``'isabelle'`` or - ``'latex'``. The default is ``'isabelle'``. 
- """ - - latex_symbols = { - '\\alpha' : '\U000003b1', - '\\beta' : '\U000003b2', - '\\gamma' : '\U000003b3', - '\\delta' : '\U000003b4', - '\\varepsilon' : '\U000003b5', - '\\zeta' : '\U000003b6', - '\\eta' : '\U000003b7', - '\\vartheta' : '\U000003b8', - '\\iota' : '\U000003b9', - '\\kappa' : '\U000003ba', - '\\lambda' : '\U000003bb', - '\\mu' : '\U000003bc', - '\\nu' : '\U000003bd', - '\\xi' : '\U000003be', - '\\pi' : '\U000003c0', - '\\varrho' : '\U000003c1', - '\\sigma' : '\U000003c3', - '\\tau' : '\U000003c4', - '\\upsilon' : '\U000003c5', - '\\varphi' : '\U000003c6', - '\\chi' : '\U000003c7', - '\\psi' : '\U000003c8', - '\\omega' : '\U000003c9', - '\\Gamma' : '\U00000393', - '\\Delta' : '\U00000394', - '\\Theta' : '\U00000398', - '\\Lambda' : '\U0000039b', - '\\Xi' : '\U0000039e', - '\\Pi' : '\U000003a0', - '\\Sigma' : '\U000003a3', - '\\Upsilon' : '\U000003a5', - '\\Phi' : '\U000003a6', - '\\Psi' : '\U000003a8', - '\\Omega' : '\U000003a9', - '\\leftarrow' : '\U00002190', - '\\longleftarrow' : '\U000027f5', - '\\rightarrow' : '\U00002192', - '\\longrightarrow' : '\U000027f6', - '\\Leftarrow' : '\U000021d0', - '\\Longleftarrow' : '\U000027f8', - '\\Rightarrow' : '\U000021d2', - '\\Longrightarrow' : '\U000027f9', - '\\leftrightarrow' : '\U00002194', - '\\longleftrightarrow' : '\U000027f7', - '\\Leftrightarrow' : '\U000021d4', - '\\Longleftrightarrow' : '\U000027fa', - '\\mapsto' : '\U000021a6', - '\\longmapsto' : '\U000027fc', - '\\relbar' : '\U00002500', - '\\Relbar' : '\U00002550', - '\\hookleftarrow' : '\U000021a9', - '\\hookrightarrow' : '\U000021aa', - '\\leftharpoondown' : '\U000021bd', - '\\rightharpoondown' : '\U000021c1', - '\\leftharpoonup' : '\U000021bc', - '\\rightharpoonup' : '\U000021c0', - '\\rightleftharpoons' : '\U000021cc', - '\\leadsto' : '\U0000219d', - '\\downharpoonleft' : '\U000021c3', - '\\downharpoonright' : '\U000021c2', - '\\upharpoonleft' : '\U000021bf', - '\\upharpoonright' : '\U000021be', - '\\restriction' : '\U000021be', - '\\uparrow' : '\U00002191', - '\\Uparrow' : '\U000021d1', - '\\downarrow' : '\U00002193', - '\\Downarrow' : '\U000021d3', - '\\updownarrow' : '\U00002195', - '\\Updownarrow' : '\U000021d5', - '\\langle' : '\U000027e8', - '\\rangle' : '\U000027e9', - '\\lceil' : '\U00002308', - '\\rceil' : '\U00002309', - '\\lfloor' : '\U0000230a', - '\\rfloor' : '\U0000230b', - '\\flqq' : '\U000000ab', - '\\frqq' : '\U000000bb', - '\\bot' : '\U000022a5', - '\\top' : '\U000022a4', - '\\wedge' : '\U00002227', - '\\bigwedge' : '\U000022c0', - '\\vee' : '\U00002228', - '\\bigvee' : '\U000022c1', - '\\forall' : '\U00002200', - '\\exists' : '\U00002203', - '\\nexists' : '\U00002204', - '\\neg' : '\U000000ac', - '\\Box' : '\U000025a1', - '\\Diamond' : '\U000025c7', - '\\vdash' : '\U000022a2', - '\\models' : '\U000022a8', - '\\dashv' : '\U000022a3', - '\\surd' : '\U0000221a', - '\\le' : '\U00002264', - '\\ge' : '\U00002265', - '\\ll' : '\U0000226a', - '\\gg' : '\U0000226b', - '\\lesssim' : '\U00002272', - '\\gtrsim' : '\U00002273', - '\\lessapprox' : '\U00002a85', - '\\gtrapprox' : '\U00002a86', - '\\in' : '\U00002208', - '\\notin' : '\U00002209', - '\\subset' : '\U00002282', - '\\supset' : '\U00002283', - '\\subseteq' : '\U00002286', - '\\supseteq' : '\U00002287', - '\\sqsubset' : '\U0000228f', - '\\sqsupset' : '\U00002290', - '\\sqsubseteq' : '\U00002291', - '\\sqsupseteq' : '\U00002292', - '\\cap' : '\U00002229', - '\\bigcap' : '\U000022c2', - '\\cup' : '\U0000222a', - '\\bigcup' : '\U000022c3', - '\\sqcup' : '\U00002294', - '\\bigsqcup' : '\U00002a06', - 
'\\sqcap' : '\U00002293', - '\\Bigsqcap' : '\U00002a05', - '\\setminus' : '\U00002216', - '\\propto' : '\U0000221d', - '\\uplus' : '\U0000228e', - '\\bigplus' : '\U00002a04', - '\\sim' : '\U0000223c', - '\\doteq' : '\U00002250', - '\\simeq' : '\U00002243', - '\\approx' : '\U00002248', - '\\asymp' : '\U0000224d', - '\\cong' : '\U00002245', - '\\equiv' : '\U00002261', - '\\Join' : '\U000022c8', - '\\bowtie' : '\U00002a1d', - '\\prec' : '\U0000227a', - '\\succ' : '\U0000227b', - '\\preceq' : '\U0000227c', - '\\succeq' : '\U0000227d', - '\\parallel' : '\U00002225', - '\\mid' : '\U000000a6', - '\\pm' : '\U000000b1', - '\\mp' : '\U00002213', - '\\times' : '\U000000d7', - '\\div' : '\U000000f7', - '\\cdot' : '\U000022c5', - '\\star' : '\U000022c6', - '\\circ' : '\U00002218', - '\\dagger' : '\U00002020', - '\\ddagger' : '\U00002021', - '\\lhd' : '\U000022b2', - '\\rhd' : '\U000022b3', - '\\unlhd' : '\U000022b4', - '\\unrhd' : '\U000022b5', - '\\triangleleft' : '\U000025c3', - '\\triangleright' : '\U000025b9', - '\\triangle' : '\U000025b3', - '\\triangleq' : '\U0000225c', - '\\oplus' : '\U00002295', - '\\bigoplus' : '\U00002a01', - '\\otimes' : '\U00002297', - '\\bigotimes' : '\U00002a02', - '\\odot' : '\U00002299', - '\\bigodot' : '\U00002a00', - '\\ominus' : '\U00002296', - '\\oslash' : '\U00002298', - '\\dots' : '\U00002026', - '\\cdots' : '\U000022ef', - '\\sum' : '\U00002211', - '\\prod' : '\U0000220f', - '\\coprod' : '\U00002210', - '\\infty' : '\U0000221e', - '\\int' : '\U0000222b', - '\\oint' : '\U0000222e', - '\\clubsuit' : '\U00002663', - '\\diamondsuit' : '\U00002662', - '\\heartsuit' : '\U00002661', - '\\spadesuit' : '\U00002660', - '\\aleph' : '\U00002135', - '\\emptyset' : '\U00002205', - '\\nabla' : '\U00002207', - '\\partial' : '\U00002202', - '\\flat' : '\U0000266d', - '\\natural' : '\U0000266e', - '\\sharp' : '\U0000266f', - '\\angle' : '\U00002220', - '\\copyright' : '\U000000a9', - '\\textregistered' : '\U000000ae', - '\\textonequarter' : '\U000000bc', - '\\textonehalf' : '\U000000bd', - '\\textthreequarters' : '\U000000be', - '\\textordfeminine' : '\U000000aa', - '\\textordmasculine' : '\U000000ba', - '\\euro' : '\U000020ac', - '\\pounds' : '\U000000a3', - '\\yen' : '\U000000a5', - '\\textcent' : '\U000000a2', - '\\textcurrency' : '\U000000a4', - '\\textdegree' : '\U000000b0', - } - - isabelle_symbols = { - '\\' : '\U0001d7ec', - '\\' : '\U0001d7ed', - '\\' : '\U0001d7ee', - '\\' : '\U0001d7ef', - '\\' : '\U0001d7f0', - '\\' : '\U0001d7f1', - '\\' : '\U0001d7f2', - '\\' : '\U0001d7f3', - '\\' : '\U0001d7f4', - '\\' : '\U0001d7f5', - '\\' : '\U0001d49c', - '\\' : '\U0000212c', - '\\' : '\U0001d49e', - '\\' : '\U0001d49f', - '\\' : '\U00002130', - '\\' : '\U00002131', - '\\' : '\U0001d4a2', - '\\' : '\U0000210b', - '\\' : '\U00002110', - '\\' : '\U0001d4a5', - '\\' : '\U0001d4a6', - '\\' : '\U00002112', - '\\' : '\U00002133', - '\\' : '\U0001d4a9', - '\\' : '\U0001d4aa', - '\\
    ' : '\U0001d5c9', - '\\' : '\U0001d5ca', - '\\' : '\U0001d5cb', - '\\' : '\U0001d5cc', - '\\' : '\U0001d5cd', - '\\' : '\U0001d5ce', - '\\' : '\U0001d5cf', - '\\' : '\U0001d5d0', - '\\' : '\U0001d5d1', - '\\' : '\U0001d5d2', - '\\' : '\U0001d5d3', - '\\' : '\U0001d504', - '\\' : '\U0001d505', - '\\' : '\U0000212d', - '\\
    ' : '\U0001d507', - '\\' : '\U0001d508', - '\\' : '\U0001d509', - '\\' : '\U0001d50a', - '\\' : '\U0000210c', - '\\' : '\U00002111', - '\\' : '\U0001d50d', - '\\' : '\U0001d50e', - '\\' : '\U0001d50f', - '\\' : '\U0001d510', - '\\' : '\U0001d511', - '\\' : '\U0001d512', - '\\' : '\U0001d513', - '\\' : '\U0001d514', - '\\' : '\U0000211c', - '\\' : '\U0001d516', - '\\' : '\U0001d517', - '\\' : '\U0001d518', - '\\' : '\U0001d519', - '\\' : '\U0001d51a', - '\\' : '\U0001d51b', - '\\' : '\U0001d51c', - '\\' : '\U00002128', - '\\' : '\U0001d51e', - '\\' : '\U0001d51f', - '\\' : '\U0001d520', - '\\
    ' : '\U0001d521', - '\\' : '\U0001d522', - '\\' : '\U0001d523', - '\\' : '\U0001d524', - '\\' : '\U0001d525', - '\\' : '\U0001d526', - '\\' : '\U0001d527', - '\\' : '\U0001d528', - '\\' : '\U0001d529', - '\\' : '\U0001d52a', - '\\' : '\U0001d52b', - '\\' : '\U0001d52c', - '\\' : '\U0001d52d', - '\\' : '\U0001d52e', - '\\' : '\U0001d52f', - '\\' : '\U0001d530', - '\\' : '\U0001d531', - '\\' : '\U0001d532', - '\\' : '\U0001d533', - '\\' : '\U0001d534', - '\\' : '\U0001d535', - '\\' : '\U0001d536', - '\\' : '\U0001d537', - '\\' : '\U000003b1', - '\\' : '\U000003b2', - '\\' : '\U000003b3', - '\\' : '\U000003b4', - '\\' : '\U000003b5', - '\\' : '\U000003b6', - '\\' : '\U000003b7', - '\\' : '\U000003b8', - '\\' : '\U000003b9', - '\\' : '\U000003ba', - '\\' : '\U000003bb', - '\\' : '\U000003bc', - '\\' : '\U000003bd', - '\\' : '\U000003be', - '\\' : '\U000003c0', - '\\' : '\U000003c1', - '\\' : '\U000003c3', - '\\' : '\U000003c4', - '\\' : '\U000003c5', - '\\' : '\U000003c6', - '\\' : '\U000003c7', - '\\' : '\U000003c8', - '\\' : '\U000003c9', - '\\' : '\U00000393', - '\\' : '\U00000394', - '\\' : '\U00000398', - '\\' : '\U0000039b', - '\\' : '\U0000039e', - '\\' : '\U000003a0', - '\\' : '\U000003a3', - '\\' : '\U000003a5', - '\\' : '\U000003a6', - '\\' : '\U000003a8', - '\\' : '\U000003a9', - '\\' : '\U0001d539', - '\\' : '\U00002102', - '\\' : '\U00002115', - '\\' : '\U0000211a', - '\\' : '\U0000211d', - '\\' : '\U00002124', - '\\' : '\U00002190', - '\\' : '\U000027f5', - '\\' : '\U00002192', - '\\' : '\U000027f6', - '\\' : '\U000021d0', - '\\' : '\U000027f8', - '\\' : '\U000021d2', - '\\' : '\U000027f9', - '\\' : '\U00002194', - '\\' : '\U000027f7', - '\\' : '\U000021d4', - '\\' : '\U000027fa', - '\\' : '\U000021a6', - '\\' : '\U000027fc', - '\\' : '\U00002500', - '\\' : '\U00002550', - '\\' : '\U000021a9', - '\\' : '\U000021aa', - '\\' : '\U000021bd', - '\\' : '\U000021c1', - '\\' : '\U000021bc', - '\\' : '\U000021c0', - '\\' : '\U000021cc', - '\\' : '\U0000219d', - '\\' : '\U000021c3', - '\\' : '\U000021c2', - '\\' : '\U000021bf', - '\\' : '\U000021be', - '\\' : '\U000021be', - '\\' : '\U00002237', - '\\' : '\U00002191', - '\\' : '\U000021d1', - '\\' : '\U00002193', - '\\' : '\U000021d3', - '\\' : '\U00002195', - '\\' : '\U000021d5', - '\\' : '\U000027e8', - '\\' : '\U000027e9', - '\\' : '\U00002308', - '\\' : '\U00002309', - '\\' : '\U0000230a', - '\\' : '\U0000230b', - '\\' : '\U00002987', - '\\' : '\U00002988', - '\\' : '\U000027e6', - '\\' : '\U000027e7', - '\\' : '\U00002983', - '\\' : '\U00002984', - '\\' : '\U000000ab', - '\\' : '\U000000bb', - '\\' : '\U000022a5', - '\\' : '\U000022a4', - '\\' : '\U00002227', - '\\' : '\U000022c0', - '\\' : '\U00002228', - '\\' : '\U000022c1', - '\\' : '\U00002200', - '\\' : '\U00002203', - '\\' : '\U00002204', - '\\' : '\U000000ac', - '\\' : '\U000025a1', - '\\' : '\U000025c7', - '\\' : '\U000022a2', - '\\' : '\U000022a8', - '\\' : '\U000022a9', - '\\' : '\U000022ab', - '\\' : '\U000022a3', - '\\' : '\U0000221a', - '\\' : '\U00002264', - '\\' : '\U00002265', - '\\' : '\U0000226a', - '\\' : '\U0000226b', - '\\' : '\U00002272', - '\\' : '\U00002273', - '\\' : '\U00002a85', - '\\' : '\U00002a86', - '\\' : '\U00002208', - '\\' : '\U00002209', - '\\' : '\U00002282', - '\\' : '\U00002283', - '\\' : '\U00002286', - '\\' : '\U00002287', - '\\' : '\U0000228f', - '\\' : '\U00002290', - '\\' : '\U00002291', - '\\' : '\U00002292', - '\\' : '\U00002229', - '\\' : '\U000022c2', - '\\' : '\U0000222a', - '\\' : '\U000022c3', - '\\' : '\U00002294', - '\\' : 
'\U00002a06', - '\\' : '\U00002293', - '\\' : '\U00002a05', - '\\' : '\U00002216', - '\\' : '\U0000221d', - '\\' : '\U0000228e', - '\\' : '\U00002a04', - '\\' : '\U00002260', - '\\' : '\U0000223c', - '\\' : '\U00002250', - '\\' : '\U00002243', - '\\' : '\U00002248', - '\\' : '\U0000224d', - '\\' : '\U00002245', - '\\' : '\U00002323', - '\\' : '\U00002261', - '\\' : '\U00002322', - '\\' : '\U000022c8', - '\\' : '\U00002a1d', - '\\' : '\U0000227a', - '\\' : '\U0000227b', - '\\' : '\U0000227c', - '\\' : '\U0000227d', - '\\' : '\U00002225', - '\\' : '\U000000a6', - '\\' : '\U000000b1', - '\\' : '\U00002213', - '\\' : '\U000000d7', - '\\
    ' : '\U000000f7', - '\\' : '\U000022c5', - '\\' : '\U000022c6', - '\\' : '\U00002219', - '\\' : '\U00002218', - '\\' : '\U00002020', - '\\' : '\U00002021', - '\\' : '\U000022b2', - '\\' : '\U000022b3', - '\\' : '\U000022b4', - '\\' : '\U000022b5', - '\\' : '\U000025c3', - '\\' : '\U000025b9', - '\\' : '\U000025b3', - '\\' : '\U0000225c', - '\\' : '\U00002295', - '\\' : '\U00002a01', - '\\' : '\U00002297', - '\\' : '\U00002a02', - '\\' : '\U00002299', - '\\' : '\U00002a00', - '\\' : '\U00002296', - '\\' : '\U00002298', - '\\' : '\U00002026', - '\\' : '\U000022ef', - '\\' : '\U00002211', - '\\' : '\U0000220f', - '\\' : '\U00002210', - '\\' : '\U0000221e', - '\\' : '\U0000222b', - '\\' : '\U0000222e', - '\\' : '\U00002663', - '\\' : '\U00002662', - '\\' : '\U00002661', - '\\' : '\U00002660', - '\\' : '\U00002135', - '\\' : '\U00002205', - '\\' : '\U00002207', - '\\' : '\U00002202', - '\\' : '\U0000266d', - '\\' : '\U0000266e', - '\\' : '\U0000266f', - '\\' : '\U00002220', - '\\' : '\U000000a9', - '\\' : '\U000000ae', - '\\' : '\U000000ad', - '\\' : '\U000000af', - '\\' : '\U000000bc', - '\\' : '\U000000bd', - '\\' : '\U000000be', - '\\' : '\U000000aa', - '\\' : '\U000000ba', - '\\
    ' : '\U000000a7', - '\\' : '\U000000b6', - '\\' : '\U000000a1', - '\\' : '\U000000bf', - '\\' : '\U000020ac', - '\\' : '\U000000a3', - '\\' : '\U000000a5', - '\\' : '\U000000a2', - '\\' : '\U000000a4', - '\\' : '\U000000b0', - '\\' : '\U00002a3f', - '\\' : '\U00002127', - '\\' : '\U000025ca', - '\\' : '\U00002118', - '\\' : '\U00002240', - '\\' : '\U000022c4', - '\\' : '\U000000b4', - '\\' : '\U00000131', - '\\' : '\U000000a8', - '\\' : '\U000000b8', - '\\' : '\U000002dd', - '\\' : '\U000003f5', - '\\' : '\U000023ce', - '\\' : '\U00002039', - '\\' : '\U0000203a', - '\\' : '\U00002302', - '\\<^sub>' : '\U000021e9', - '\\<^sup>' : '\U000021e7', - '\\<^bold>' : '\U00002759', - '\\<^bsub>' : '\U000021d8', - '\\<^esub>' : '\U000021d9', - '\\<^bsup>' : '\U000021d7', - '\\<^esup>' : '\U000021d6', - } - - lang_map = {'isabelle' : isabelle_symbols, 'latex' : latex_symbols} - - def __init__(self, **options): - Filter.__init__(self, **options) - lang = get_choice_opt(options, 'lang', - ['isabelle', 'latex'], 'isabelle') - self.symbols = self.lang_map[lang] - - def filter(self, lexer, stream): - for ttype, value in stream: - if value in self.symbols: - yield ttype, self.symbols[value] - else: - yield ttype, value - - -class KeywordCaseFilter(Filter): - """Convert keywords to lowercase or uppercase or capitalize them, which - means first letter uppercase, rest lowercase. - - This can be useful e.g. if you highlight Pascal code and want to adapt the - code to your styleguide. - - Options accepted: - - `case` : string - The casing to convert keywords to. Must be one of ``'lower'``, - ``'upper'`` or ``'capitalize'``. The default is ``'lower'``. - """ - - def __init__(self, **options): - Filter.__init__(self, **options) - case = get_choice_opt(options, 'case', - ['lower', 'upper', 'capitalize'], 'lower') - self.convert = getattr(str, case) - - def filter(self, lexer, stream): - for ttype, value in stream: - if ttype in Keyword: - yield ttype, self.convert(value) - else: - yield ttype, value - - -class NameHighlightFilter(Filter): - """Highlight a normal Name (and Name.*) token with a different token type. - - Example:: - - filter = NameHighlightFilter( - names=['foo', 'bar', 'baz'], - tokentype=Name.Function, - ) - - This would highlight the names "foo", "bar" and "baz" - as functions. `Name.Function` is the default token type. - - Options accepted: - - `names` : list of strings - A list of names that should be given the different token type. - There is no default. - `tokentype` : TokenType or string - A token type or a string containing a token type name that is - used for highlighting the strings in `names`. The default is - `Name.Function`. - """ - - def __init__(self, **options): - Filter.__init__(self, **options) - self.names = set(get_list_opt(options, 'names', [])) - tokentype = options.get('tokentype') - if tokentype: - self.tokentype = string_to_tokentype(tokentype) - else: - self.tokentype = Name.Function - - def filter(self, lexer, stream): - for ttype, value in stream: - if ttype in Name and value in self.names: - yield self.tokentype, value - else: - yield ttype, value - - -class ErrorToken(Exception): - pass - - -class RaiseOnErrorTokenFilter(Filter): - """Raise an exception when the lexer generates an error token. - - Options accepted: - - `excclass` : Exception class - The exception class to raise. - The default is `pygments.filters.ErrorToken`. - - .. 
versionadded:: 0.8 - """ - - def __init__(self, **options): - Filter.__init__(self, **options) - self.exception = options.get('excclass', ErrorToken) - try: - # issubclass() will raise TypeError if first argument is not a class - if not issubclass(self.exception, Exception): - raise TypeError - except TypeError: - raise OptionError('excclass option is not an exception class') - - def filter(self, lexer, stream): - for ttype, value in stream: - if ttype is Error: - raise self.exception(value) - yield ttype, value - - -class VisibleWhitespaceFilter(Filter): - """Convert tabs, newlines and/or spaces to visible characters. - - Options accepted: - - `spaces` : string or bool - If this is a one-character string, spaces will be replaces by this string. - If it is another true value, spaces will be replaced by ``·`` (unicode - MIDDLE DOT). If it is a false value, spaces will not be replaced. The - default is ``False``. - `tabs` : string or bool - The same as for `spaces`, but the default replacement character is ``»`` - (unicode RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK). The default value - is ``False``. Note: this will not work if the `tabsize` option for the - lexer is nonzero, as tabs will already have been expanded then. - `tabsize` : int - If tabs are to be replaced by this filter (see the `tabs` option), this - is the total number of characters that a tab should be expanded to. - The default is ``8``. - `newlines` : string or bool - The same as for `spaces`, but the default replacement character is ``¶`` - (unicode PILCROW SIGN). The default value is ``False``. - `wstokentype` : bool - If true, give whitespace the special `Whitespace` token type. This allows - styling the visible whitespace differently (e.g. greyed out), but it can - disrupt background colors. The default is ``True``. - - .. versionadded:: 0.8 - """ - - def __init__(self, **options): - Filter.__init__(self, **options) - for name, default in [('spaces', '·'), - ('tabs', '»'), - ('newlines', '¶')]: - opt = options.get(name, False) - if isinstance(opt, str) and len(opt) == 1: - setattr(self, name, opt) - else: - setattr(self, name, (opt and default or '')) - tabsize = get_int_opt(options, 'tabsize', 8) - if self.tabs: - self.tabs += ' ' * (tabsize - 1) - if self.newlines: - self.newlines += '\n' - self.wstt = get_bool_opt(options, 'wstokentype', True) - - def filter(self, lexer, stream): - if self.wstt: - spaces = self.spaces or ' ' - tabs = self.tabs or '\t' - newlines = self.newlines or '\n' - regex = re.compile(r'\s') - - def replacefunc(wschar): - if wschar == ' ': - return spaces - elif wschar == '\t': - return tabs - elif wschar == '\n': - return newlines - return wschar - - for ttype, value in stream: - yield from _replace_special(ttype, value, regex, Whitespace, - replacefunc) - else: - spaces, tabs, newlines = self.spaces, self.tabs, self.newlines - # simpler processing - for ttype, value in stream: - if spaces: - value = value.replace(' ', spaces) - if tabs: - value = value.replace('\t', tabs) - if newlines: - value = value.replace('\n', newlines) - yield ttype, value - - -class GobbleFilter(Filter): - """Gobbles source code lines (eats initial characters). - - This filter drops the first ``n`` characters off every line of code. This - may be useful when the source code fed to the lexer is indented by a fixed - amount of space that isn't desired in the output. - - Options accepted: - - `n` : int - The number of characters to gobble. - - .. 
versionadded:: 1.2 - """ - def __init__(self, **options): - Filter.__init__(self, **options) - self.n = get_int_opt(options, 'n', 0) - - def gobble(self, value, left): - if left < len(value): - return value[left:], 0 - else: - return '', left - len(value) - - def filter(self, lexer, stream): - n = self.n - left = n # How many characters left to gobble. - for ttype, value in stream: - # Remove ``left`` tokens from first line, ``n`` from all others. - parts = value.split('\n') - (parts[0], left) = self.gobble(parts[0], left) - for i in range(1, len(parts)): - (parts[i], left) = self.gobble(parts[i], n) - value = '\n'.join(parts) - - if value != '': - yield ttype, value - - -class TokenMergeFilter(Filter): - """Merges consecutive tokens with the same token type in the output - stream of a lexer. - - .. versionadded:: 1.2 - """ - def __init__(self, **options): - Filter.__init__(self, **options) - - def filter(self, lexer, stream): - current_type = None - current_value = None - for ttype, value in stream: - if ttype is current_type: - current_value += value - else: - if current_type is not None: - yield current_type, current_value - current_type = ttype - current_value = value - if current_type is not None: - yield current_type, current_value - - -FILTERS = { - 'codetagify': CodeTagFilter, - 'keywordcase': KeywordCaseFilter, - 'highlight': NameHighlightFilter, - 'raiseonerror': RaiseOnErrorTokenFilter, - 'whitespace': VisibleWhitespaceFilter, - 'gobble': GobbleFilter, - 'tokenmerge': TokenMergeFilter, - 'symbols': SymbolFilter, -} diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_loop.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_loop.py deleted file mode 100644 index 01c6cafbe53f1fcb12f7b382b2b35e2fd2c69933..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_loop.py +++ /dev/null @@ -1,43 +0,0 @@ -from typing import Iterable, Tuple, TypeVar - -T = TypeVar("T") - - -def loop_first(values: Iterable[T]) -> Iterable[Tuple[bool, T]]: - """Iterate and generate a tuple with a flag for first value.""" - iter_values = iter(values) - try: - value = next(iter_values) - except StopIteration: - return - yield True, value - for value in iter_values: - yield False, value - - -def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]: - """Iterate and generate a tuple with a flag for last value.""" - iter_values = iter(values) - try: - previous_value = next(iter_values) - except StopIteration: - return - for value in iter_values: - yield False, previous_value - previous_value = value - yield True, previous_value - - -def loop_first_last(values: Iterable[T]) -> Iterable[Tuple[bool, bool, T]]: - """Iterate and generate a tuple with a flag for first and last value.""" - iter_values = iter(values) - try: - previous_value = next(iter_values) - except StopIteration: - return - first = True - for value in iter_values: - yield first, False, previous_value - first = False - previous_value = value - yield first, True, previous_value diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/cityscapes_evaluation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/cityscapes_evaluation.py deleted file mode 100644 index 3fb6c4cd5f752d639570d022cb23ce18491c370a..0000000000000000000000000000000000000000 --- 
a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/cityscapes_evaluation.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import glob -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -import torch -from PIL import Image - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class CityscapesEvaluator(DatasetEvaluator): - """ - Base class for evaluation using cityscapes API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): the name of the dataset. - It must have the following metadata associated with it: - "thing_classes", "gt_dir". - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") - self._temp_dir = self._working_dir.name - # All workers will write to the same results directory - # TODO this does not work in distributed training - self._temp_dir = comm.all_gather(self._temp_dir)[0] - if self._temp_dir != self._working_dir.name: - self._working_dir.cleanup() - self._logger.info( - "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) - ) - - -class CityscapesInstanceEvaluator(CityscapesEvaluator): - """ - Evaluate instance segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import name2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") - - if "instances" in output: - output = output["instances"].to(self._cpu_device) - num_instances = len(output) - with open(pred_txt, "w") as fout: - for i in range(num_instances): - pred_class = output.pred_classes[i] - classes = self._metadata.thing_classes[pred_class] - class_id = name2label[classes].id - score = output.scores[i] - mask = output.pred_masks[i].numpy().astype("uint8") - png_filename = os.path.join( - self._temp_dir, basename + "_{}_{}.png".format(i, classes) - ) - - Image.fromarray(mask * 255).save(png_filename) - fout.write( - "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) - ) - else: - # Cityscapes requires a prediction file for every ground truth image. - with open(pred_txt, "w") as fout: - pass - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP" and "AP50". 
- """ - comm.synchronize() - if comm.get_rank() > 0: - return - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - )["averages"] - - ret = OrderedDict() - ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} - self._working_dir.cleanup() - return ret - - -class CityscapesSemSegEvaluator(CityscapesEvaluator): - """ - Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import trainId2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") - - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() - pred = 255 * np.ones(output.shape, dtype=np.uint8) - for train_id, label in trainId2label.items(): - if label.ignoreInEval: - continue - pred[output == train_id] = label.id - Image.fromarray(pred).save(pred_filename) - - def evaluate(self): - comm.synchronize() - if comm.get_rank() > 0: - return - # Load the Cityscapes eval script *after* setting the required env var, - # since the script reads CITYSCAPES_DATASET into global variables at load time. - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. 
Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - ) - ret = OrderedDict() - ret["sem_seg"] = { - "IoU": 100.0 * results["averageScoreClasses"], - "iIoU": 100.0 * results["averageScoreInstClasses"], - "IoU_sup": 100.0 * results["averageScoreCategories"], - "iIoU_sup": 100.0 * results["averageScoreInstCategories"], - } - self._working_dir.cleanup() - return ret diff --git a/spaces/Banbri/zcvzcv/src/app/globals.css b/spaces/Banbri/zcvzcv/src/app/globals.css deleted file mode 100644 index 0cf8023e33e241d7b2da5b5e66475196c2127001..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/globals.css +++ /dev/null @@ -1,39 +0,0 @@ -@tailwind base; -@tailwind components; -@tailwind utilities; - -:root { - --foreground-rgb: 0, 0, 0; - --background-start-rgb: 214, 219, 220; - --background-end-rgb: 255, 255, 255; -} - -@media (prefers-color-scheme: dark) { - :root { - --foreground-rgb: 255, 255, 255; - --background-start-rgb: 0, 0, 0; - --background-end-rgb: 0, 0, 0; - } -} - -body { - color: rgb(var(--foreground-rgb)); - background: linear-gradient( - to bottom, - transparent, - rgb(var(--background-end-rgb)) - ) - rgb(var(--background-start-rgb)); -} - - -/* this is the trick to bypass the style={{}} attribute when printing */ -@media print { - .comic-page[style] { width: 100vw !important; } -} - - -.render-to-image .comic-panel { - height: auto !important; - /* max-width: fit-content !important; */ -} diff --git a/spaces/Banbri/zcvzcv/src/app/queries/getStyle.ts b/spaces/Banbri/zcvzcv/src/app/queries/getStyle.ts deleted file mode 100644 index 85c75b209b87382bef6996126d1ba8f0bf48ee32..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/queries/getStyle.ts +++ /dev/null @@ -1,52 +0,0 @@ -import { createLlamaPrompt } from "@/lib/createLlamaPrompt" - -import { predict } from "./predict" -import { Preset } from "../engine/presets" - -export const getStory = async ({ - preset, - prompt = "", -}: { - preset: Preset; - prompt: string; -}) => { - - const query = createLlamaPrompt([ - { - role: "system", - content: [ - `You are a comic book author specialized in ${preset.llmPrompt}`, - `You are going to be asked to write a comic book page, your mission is to answer a JSON array containing 4 items, to describe the page (one item per panel).`, - `Each array item should be a comic book panel caption the describe the environment, era, characters, objects, textures, lighting.`, - `Be brief in your caption don't add your own comments. Be straight to the point, and never reply things like "Sure, I can.." 
etc.` - ].filter(item => item).join("\n") - }, - { - role: "user", - content: `The story is: ${prompt}`, - } - ]) - - - let result = "" - try { - result = `${await predict(query) || ""}`.trim() - if (!result.length) { - throw new Error("empty result!") - } - } catch (err) { - console.log(`prediction of the story failed, trying again..`) - try { - result = `${await predict(query+".") || ""}`.trim() - if (!result.length) { - throw new Error("empty result!") - } - } catch (err) { - console.error(`prediction of the story failed again!`) - throw new Error(`failed to generate the story ${err}`) - } - } - - const tmp = result // result.split("Caption:").pop() || result - return tmp.replaceAll("\n", ", ") -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/utils/backups.py b/spaces/Bart92/RVC_HF/utils/backups.py deleted file mode 100644 index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/utils/backups.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import shutil -import hashlib -import time -import base64 - - - - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - weights_exist = False - for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH): - for filename in files: - filepath = os.path.join(root, filename) - if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if 
num_copied == 0: - print("No new finished models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. - -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. 
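The backup loop in the deleted `backups.py` above re-copies a file only when its current modification time is newer than the timestamp recorded on the previous pass, persisting those timestamps to a text file between runs. A compressed sketch of just that decision logic, assuming hypothetical local source/destination paths rather than the Colab and Google Drive locations used by the Space:

```python
# Sketch of the mtime-based incremental copy idea from backup_files() above.
# SRC and DST are placeholder paths, not the paths used by the deleted Space.
import os
import shutil

SRC = "/tmp/example_src"
DST = "/tmp/example_dst"

def incremental_copy(src_root, dst_root, last_seen):
    """Copy files whose mtime is newer than the recorded timestamp.

    last_seen maps source paths to the mtime observed at the previous run;
    it is updated in place and returned so the caller can persist it.
    """
    for root, _dirs, files in os.walk(src_root):
        for name in files:
            src_path = os.path.join(root, name)
            rel_path = os.path.relpath(src_path, src_root)
            dst_path = os.path.join(dst_root, rel_path)
            mtime = os.path.getmtime(src_path)
            if last_seen.get(src_path, 0.0) < mtime:
                os.makedirs(os.path.dirname(dst_path), exist_ok=True)
                shutil.copy2(src_path, dst_path)  # copy2 keeps file metadata
                last_seen[src_path] = mtime
    return last_seen

state = incremental_copy(SRC, DST, {})  # first run copies everything under SRC
```

Comparing mtimes is cheaper than hashing every file on every pass, which may be why the deleted script records timestamps even though it also defines a chunked MD5 helper that appears unused in the file as deleted.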
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/lpips.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/lpips.py deleted file mode 100644 index 86a00ba45653975c1b5e542388509b8a5a344730..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/lpips.py +++ /dev/null @@ -1,123 +0,0 @@ -"""Stripped version of https://github.com/richzhang/PerceptualSimilarity/tree/master/models""" - -import torch -import torch.nn as nn -from torchvision import models -from collections import namedtuple - -from taming.util import get_ckpt_path - - -class LPIPS(nn.Module): - # Learned perceptual metric - def __init__(self, use_dropout=True): - super().__init__() - self.scaling_layer = ScalingLayer() - self.chns = [64, 128, 256, 512, 512] # vg16 features - self.net = vgg16(pretrained=True, requires_grad=False) - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.load_from_pretrained() - for param in self.parameters(): - param.requires_grad = False - - def load_from_pretrained(self, name="vgg_lpips"): - ckpt = get_ckpt_path(name, "taming/modules/autoencoder/lpips") - self.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False) - print("loaded pretrained LPIPS loss from {}".format(ckpt)) - - @classmethod - def from_pretrained(cls, name="vgg_lpips"): - if name is not "vgg_lpips": - raise NotImplementedError - model = cls() - ckpt = get_ckpt_path(name) - model.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False) - return model - - def forward(self, input, target): - in0_input, in1_input = (self.scaling_layer(input), self.scaling_layer(target)) - outs0, outs1 = self.net(in0_input), self.net(in1_input) - feats0, feats1, diffs = {}, {}, {} - lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4] - for kk in range(len(self.chns)): - feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk] - feats1[kk]) ** 2 - - res = [spatial_average(lins[kk].model(diffs[kk]), keepdim=True) for kk in range(len(self.chns))] - val = res[0] - for l in range(1, len(self.chns)): - val += res[l] - return val - - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer('scale', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - - -class NetLinLayer(nn.Module): - """ A single linear layer which does a 1x1 conv """ - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - layers = [nn.Dropout(), ] if (use_dropout) else [] - layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ] - self.model = nn.Sequential(*layers) - - -class vgg16(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(vgg16, self).__init__() - vgg_pretrained_features = models.vgg16(pretrained=pretrained).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = 
torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(4): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(4, 9): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(9, 16): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(16, 23): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(23, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1_2 = h - h = self.slice2(h) - h_relu2_2 = h - h = self.slice3(h) - h_relu3_3 = h - h = self.slice4(h) - h_relu4_3 = h - h = self.slice5(h) - h_relu5_3 = h - vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3']) - out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3) - return out - - -def normalize_tensor(x,eps=1e-10): - norm_factor = torch.sqrt(torch.sum(x**2,dim=1,keepdim=True)) - return x/(norm_factor+eps) - - -def spatial_average(x, keepdim=True): - return x.mean([2,3],keepdim=keepdim) - diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/conditions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/conditions.py deleted file mode 100644 index 442f11c4cd99fe5b3d7c411251ebd548b49a82a7..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/conditions.py +++ /dev/null @@ -1,462 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. 
-import re -from collections import namedtuple - -from boto3.exceptions import ( - DynamoDBNeedsConditionError, - DynamoDBNeedsKeyConditionError, - DynamoDBOperationNotSupportedError, -) - -ATTR_NAME_REGEX = re.compile(r'[^.\[\]]+(?![^\[]*\])') - - -class ConditionBase: - - expression_format = '' - expression_operator = '' - has_grouped_values = False - - def __init__(self, *values): - self._values = values - - def __and__(self, other): - if not isinstance(other, ConditionBase): - raise DynamoDBOperationNotSupportedError('AND', other) - return And(self, other) - - def __or__(self, other): - if not isinstance(other, ConditionBase): - raise DynamoDBOperationNotSupportedError('OR', other) - return Or(self, other) - - def __invert__(self): - return Not(self) - - def get_expression(self): - return { - 'format': self.expression_format, - 'operator': self.expression_operator, - 'values': self._values, - } - - def __eq__(self, other): - if isinstance(other, type(self)): - if self._values == other._values: - return True - return False - - def __ne__(self, other): - return not self.__eq__(other) - - -class AttributeBase: - def __init__(self, name): - self.name = name - - def __and__(self, value): - raise DynamoDBOperationNotSupportedError('AND', self) - - def __or__(self, value): - raise DynamoDBOperationNotSupportedError('OR', self) - - def __invert__(self): - raise DynamoDBOperationNotSupportedError('NOT', self) - - def eq(self, value): - """Creates a condition where the attribute is equal to the value. - - :param value: The value that the attribute is equal to. - """ - return Equals(self, value) - - def lt(self, value): - """Creates a condition where the attribute is less than the value. - - :param value: The value that the attribute is less than. - """ - return LessThan(self, value) - - def lte(self, value): - """Creates a condition where the attribute is less than or equal to the - value. - - :param value: The value that the attribute is less than or equal to. - """ - return LessThanEquals(self, value) - - def gt(self, value): - """Creates a condition where the attribute is greater than the value. - - :param value: The value that the attribute is greater than. - """ - return GreaterThan(self, value) - - def gte(self, value): - """Creates a condition where the attribute is greater than or equal to - the value. - - :param value: The value that the attribute is greater than or equal to. - """ - return GreaterThanEquals(self, value) - - def begins_with(self, value): - """Creates a condition where the attribute begins with the value. - - :param value: The value that the attribute begins with. - """ - return BeginsWith(self, value) - - def between(self, low_value, high_value): - """Creates a condition where the attribute is greater than or equal - to the low value and less than or equal to the high value. - - :param low_value: The value that the attribute is greater than or equal to. - :param high_value: The value that the attribute is less than or equal to. - """ - return Between(self, low_value, high_value) - - def __eq__(self, other): - return isinstance(other, type(self)) and self.name == other.name - - def __ne__(self, other): - return not self.__eq__(other) - - -class ConditionAttributeBase(ConditionBase, AttributeBase): - """This base class is for conditions that can have attribute methods. - - One example is the Size condition. To complete a condition, you need - to apply another AttributeBase method like eq(). 
- """ - - def __init__(self, *values): - ConditionBase.__init__(self, *values) - # This is assuming the first value to the condition is the attribute - # in which can be used to generate its attribute base. - AttributeBase.__init__(self, values[0].name) - - def __eq__(self, other): - return ConditionBase.__eq__(self, other) and AttributeBase.__eq__( - self, other - ) - - def __ne__(self, other): - return not self.__eq__(other) - - -class ComparisonCondition(ConditionBase): - expression_format = '{0} {operator} {1}' - - -class Equals(ComparisonCondition): - expression_operator = '=' - - -class NotEquals(ComparisonCondition): - expression_operator = '<>' - - -class LessThan(ComparisonCondition): - expression_operator = '<' - - -class LessThanEquals(ComparisonCondition): - expression_operator = '<=' - - -class GreaterThan(ComparisonCondition): - expression_operator = '>' - - -class GreaterThanEquals(ComparisonCondition): - expression_operator = '>=' - - -class In(ComparisonCondition): - expression_operator = 'IN' - has_grouped_values = True - - -class Between(ConditionBase): - expression_operator = 'BETWEEN' - expression_format = '{0} {operator} {1} AND {2}' - - -class BeginsWith(ConditionBase): - expression_operator = 'begins_with' - expression_format = '{operator}({0}, {1})' - - -class Contains(ConditionBase): - expression_operator = 'contains' - expression_format = '{operator}({0}, {1})' - - -class Size(ConditionAttributeBase): - expression_operator = 'size' - expression_format = '{operator}({0})' - - -class AttributeType(ConditionBase): - expression_operator = 'attribute_type' - expression_format = '{operator}({0}, {1})' - - -class AttributeExists(ConditionBase): - expression_operator = 'attribute_exists' - expression_format = '{operator}({0})' - - -class AttributeNotExists(ConditionBase): - expression_operator = 'attribute_not_exists' - expression_format = '{operator}({0})' - - -class And(ConditionBase): - expression_operator = 'AND' - expression_format = '({0} {operator} {1})' - - -class Or(ConditionBase): - expression_operator = 'OR' - expression_format = '({0} {operator} {1})' - - -class Not(ConditionBase): - expression_operator = 'NOT' - expression_format = '({operator} {0})' - - -class Key(AttributeBase): - pass - - -class Attr(AttributeBase): - """Represents an DynamoDB item's attribute.""" - - def ne(self, value): - """Creates a condition where the attribute is not equal to the value - - :param value: The value that the attribute is not equal to. - """ - return NotEquals(self, value) - - def is_in(self, value): - """Creates a condition where the attribute is in the value, - - :type value: list - :param value: The value that the attribute is in. - """ - return In(self, value) - - def exists(self): - """Creates a condition where the attribute exists.""" - return AttributeExists(self) - - def not_exists(self): - """Creates a condition where the attribute does not exist.""" - return AttributeNotExists(self) - - def contains(self, value): - """Creates a condition where the attribute contains the value. - - :param value: The value the attribute contains. - """ - return Contains(self, value) - - def size(self): - """Creates a condition for the attribute size. - - Note another AttributeBase method must be called on the returned - size condition to be a valid DynamoDB condition. - """ - return Size(self) - - def attribute_type(self, value): - """Creates a condition for the attribute type. - - :param value: The type of the attribute. 
- """ - return AttributeType(self, value) - - -BuiltConditionExpression = namedtuple( - 'BuiltConditionExpression', - [ - 'condition_expression', - 'attribute_name_placeholders', - 'attribute_value_placeholders', - ], -) - - -class ConditionExpressionBuilder: - """This class is used to build condition expressions with placeholders""" - - def __init__(self): - self._name_count = 0 - self._value_count = 0 - self._name_placeholder = 'n' - self._value_placeholder = 'v' - - def _get_name_placeholder(self): - return '#' + self._name_placeholder + str(self._name_count) - - def _get_value_placeholder(self): - return ':' + self._value_placeholder + str(self._value_count) - - def reset(self): - """Resets the placeholder name and values""" - self._name_count = 0 - self._value_count = 0 - - def build_expression(self, condition, is_key_condition=False): - """Builds the condition expression and the dictionary of placeholders. - - :type condition: ConditionBase - :param condition: A condition to be built into a condition expression - string with any necessary placeholders. - - :type is_key_condition: Boolean - :param is_key_condition: True if the expression is for a - KeyConditionExpression. False otherwise. - - :rtype: (string, dict, dict) - :returns: Will return a string representing the condition with - placeholders inserted where necessary, a dictionary of - placeholders for attribute names, and a dictionary of - placeholders for attribute values. Here is a sample return value: - - ('#n0 = :v0', {'#n0': 'myattribute'}, {':v1': 'myvalue'}) - """ - if not isinstance(condition, ConditionBase): - raise DynamoDBNeedsConditionError(condition) - attribute_name_placeholders = {} - attribute_value_placeholders = {} - condition_expression = self._build_expression( - condition, - attribute_name_placeholders, - attribute_value_placeholders, - is_key_condition=is_key_condition, - ) - return BuiltConditionExpression( - condition_expression=condition_expression, - attribute_name_placeholders=attribute_name_placeholders, - attribute_value_placeholders=attribute_value_placeholders, - ) - - def _build_expression( - self, - condition, - attribute_name_placeholders, - attribute_value_placeholders, - is_key_condition, - ): - expression_dict = condition.get_expression() - replaced_values = [] - for value in expression_dict['values']: - # Build the necessary placeholders for that value. - # Placeholders are built for both attribute names and values. - replaced_value = self._build_expression_component( - value, - attribute_name_placeholders, - attribute_value_placeholders, - condition.has_grouped_values, - is_key_condition, - ) - replaced_values.append(replaced_value) - # Fill out the expression using the operator and the - # values that have been replaced with placeholders. - return expression_dict['format'].format( - *replaced_values, operator=expression_dict['operator'] - ) - - def _build_expression_component( - self, - value, - attribute_name_placeholders, - attribute_value_placeholders, - has_grouped_values, - is_key_condition, - ): - # Continue to recurse if the value is a ConditionBase in order - # to extract out all parts of the expression. - if isinstance(value, ConditionBase): - return self._build_expression( - value, - attribute_name_placeholders, - attribute_value_placeholders, - is_key_condition, - ) - # If it is not a ConditionBase, we can recurse no further. 
- # So we check if it is an attribute and add placeholders for - # its name - elif isinstance(value, AttributeBase): - if is_key_condition and not isinstance(value, Key): - raise DynamoDBNeedsKeyConditionError( - f'Attribute object {value.name} is of type {type(value)}. ' - f'KeyConditionExpression only supports Attribute objects ' - f'of type Key' - ) - return self._build_name_placeholder( - value, attribute_name_placeholders - ) - # If it is anything else, we treat it as a value and thus placeholders - # are needed for the value. - else: - return self._build_value_placeholder( - value, attribute_value_placeholders, has_grouped_values - ) - - def _build_name_placeholder(self, value, attribute_name_placeholders): - attribute_name = value.name - # Figure out which parts of the attribute name that needs replacement. - attribute_name_parts = ATTR_NAME_REGEX.findall(attribute_name) - - # Add a temporary placeholder for each of these parts. - placeholder_format = ATTR_NAME_REGEX.sub('%s', attribute_name) - str_format_args = [] - for part in attribute_name_parts: - name_placeholder = self._get_name_placeholder() - self._name_count += 1 - str_format_args.append(name_placeholder) - # Add the placeholder and value to dictionary of name placeholders. - attribute_name_placeholders[name_placeholder] = part - # Replace the temporary placeholders with the designated placeholders. - return placeholder_format % tuple(str_format_args) - - def _build_value_placeholder( - self, value, attribute_value_placeholders, has_grouped_values=False - ): - # If the values are grouped, we need to add a placeholder for - # each element inside of the actual value. - if has_grouped_values: - placeholder_list = [] - for v in value: - value_placeholder = self._get_value_placeholder() - self._value_count += 1 - placeholder_list.append(value_placeholder) - attribute_value_placeholders[value_placeholder] = v - # Assuming the values are grouped by parenthesis. - # IN is the currently the only one that uses this so it maybe - # needed to be changed in future. - return '(' + ', '.join(placeholder_list) + ')' - # Otherwise, treat the value as a single value that needs only - # one placeholder. - else: - value_placeholder = self._get_value_placeholder() - self._value_count += 1 - attribute_value_placeholders[value_placeholder] = value - return value_placeholder diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/__init__.py deleted file mode 100644 index 6c24cc2b30421bad1cb5f8ca525bc42b57ad9761..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/__init__.py +++ /dev/null @@ -1,247 +0,0 @@ -"""Extensions to the 'distutils' for large or complex distributions""" - -import functools -import os -import re -import warnings - -import _distutils_hack.override # noqa: F401 - -import distutils.core -from distutils.errors import DistutilsOptionError -from distutils.util import convert_path as _convert_path - -from ._deprecation_warning import SetuptoolsDeprecationWarning - -import setuptools.version -from setuptools.extension import Extension -from setuptools.dist import Distribution -from setuptools.depends import Require -from setuptools.discovery import PackageFinder, PEP420PackageFinder -from . import monkey -from . 
import logging - - -__all__ = [ - 'setup', - 'Distribution', - 'Command', - 'Extension', - 'Require', - 'SetuptoolsDeprecationWarning', - 'find_packages', - 'find_namespace_packages', -] - -__version__ = setuptools.version.__version__ - -bootstrap_install_from = None - - -find_packages = PackageFinder.find -find_namespace_packages = PEP420PackageFinder.find - - -def _install_setup_requires(attrs): - # Note: do not use `setuptools.Distribution` directly, as - # our PEP 517 backend patch `distutils.core.Distribution`. - class MinimalDistribution(distutils.core.Distribution): - """ - A minimal version of a distribution for supporting the - fetch_build_eggs interface. - """ - - def __init__(self, attrs): - _incl = 'dependency_links', 'setup_requires' - filtered = {k: attrs[k] for k in set(_incl) & set(attrs)} - super().__init__(filtered) - # Prevent accidentally triggering discovery with incomplete set of attrs - self.set_defaults._disable() - - def _get_project_config_files(self, filenames=None): - """Ignore ``pyproject.toml``, they are not related to setup_requires""" - try: - cfg, toml = super()._split_standard_project_metadata(filenames) - return cfg, () - except Exception: - return filenames, () - - def finalize_options(self): - """ - Disable finalize_options to avoid building the working set. - Ref #2158. - """ - - dist = MinimalDistribution(attrs) - - # Honor setup.cfg's options. - dist.parse_config_files(ignore_option_errors=True) - if dist.setup_requires: - dist.fetch_build_eggs(dist.setup_requires) - - -def setup(**attrs): - # Make sure we have any requirements needed to interpret 'attrs'. - logging.configure() - _install_setup_requires(attrs) - return distutils.core.setup(**attrs) - - -setup.__doc__ = distutils.core.setup.__doc__ - - -_Command = monkey.get_unpatched(distutils.core.Command) - - -class Command(_Command): - """ - Setuptools internal actions are organized using a *command design pattern*. - This means that each action (or group of closely related actions) executed during - the build should be implemented as a ``Command`` subclass. - - These commands are abstractions and do not necessarily correspond to a command that - can (or should) be executed via a terminal, in a CLI fashion (although historically - they would). - - When creating a new command from scratch, custom defined classes **SHOULD** inherit - from ``setuptools.Command`` and implement a few mandatory methods. - Between these mandatory methods, are listed: - - .. method:: initialize_options(self) - - Set or (reset) all options/attributes/caches used by the command - to their default values. Note that these values may be overwritten during - the build. - - .. method:: finalize_options(self) - - Set final values for all options/attributes used by the command. - Most of the time, each option/attribute/cache should only be set if it does not - have any value yet (e.g. ``if self.attr is None: self.attr = val``). - - .. method:: run(self) - - Execute the actions intended by the command. - (Side effects **SHOULD** only take place when ``run`` is executed, - for example, creating new files or writing to the terminal output). - - A useful analogy for command classes is to think of them as subroutines with local - variables called "options". The options are "declared" in ``initialize_options()`` - and "defined" (given their final values, aka "finalized") in ``finalize_options()``, - both of which must be defined by every command class. The "body" of the subroutine, - (where it does all the work) is the ``run()`` method. 
- Between ``initialize_options()`` and ``finalize_options()``, ``setuptools`` may set - the values for options/attributes based on user's input (or circumstance), - which means that the implementation should be careful to not overwrite values in - ``finalize_options`` unless necessary. - - Please note that other commands (or other parts of setuptools) may also overwrite - the values of the command's options/attributes multiple times during the build - process. - Therefore it is important to consistently implement ``initialize_options()`` and - ``finalize_options()``. For example, all derived attributes (or attributes that - depend on the value of other attributes) **SHOULD** be recomputed in - ``finalize_options``. - - When overwriting existing commands, custom defined classes **MUST** abide by the - same APIs implemented by the original class. They also **SHOULD** inherit from the - original class. - """ - - command_consumes_arguments = False - - def __init__(self, dist, **kw): - """ - Construct the command for dist, updating - vars(self) with any keyword parameters. - """ - super().__init__(dist) - vars(self).update(kw) - - def _ensure_stringlike(self, option, what, default=None): - val = getattr(self, option) - if val is None: - setattr(self, option, default) - return default - elif not isinstance(val, str): - raise DistutilsOptionError( - "'%s' must be a %s (got `%s`)" % (option, what, val) - ) - return val - - def ensure_string_list(self, option): - r"""Ensure that 'option' is a list of strings. If 'option' is - currently a string, we split it either on /,\s*/ or /\s+/, so - "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become - ["foo", "bar", "baz"]. - - .. - TODO: This method seems to be similar to the one in ``distutils.cmd`` - Probably it is just here for backward compatibility with old Python versions? - - :meta private: - """ - val = getattr(self, option) - if val is None: - return - elif isinstance(val, str): - setattr(self, option, re.split(r',\s*|\s+', val)) - else: - if isinstance(val, list): - ok = all(isinstance(v, str) for v in val) - else: - ok = False - if not ok: - raise DistutilsOptionError( - "'%s' must be a list of strings (got %r)" % (option, val) - ) - - def reinitialize_command(self, command, reinit_subcommands=0, **kw): - cmd = _Command.reinitialize_command(self, command, reinit_subcommands) - vars(cmd).update(kw) - return cmd - - -def _find_all_simple(path): - """ - Find all files under 'path' - """ - results = ( - os.path.join(base, file) - for base, dirs, files in os.walk(path, followlinks=True) - for file in files - ) - return filter(os.path.isfile, results) - - -def findall(dir=os.curdir): - """ - Find all files under 'dir' and return the list of full filenames. - Unless dir is '.', return full filenames with dir prepended. - """ - files = _find_all_simple(dir) - if dir == os.curdir: - make_rel = functools.partial(os.path.relpath, start=dir) - files = map(make_rel, files) - return list(files) - - -@functools.wraps(_convert_path) -def convert_path(pathname): - from inspect import cleandoc - - msg = """ - The function `convert_path` is considered internal and not part of the public API. - Its direct usage by 3rd-party packages is considered deprecated and the function - may be removed in the future. 
- """ - warnings.warn(cleandoc(msg), SetuptoolsDeprecationWarning) - return _convert_path(pathname) - - -class sic(str): - """Treat this string as-is (https://en.wikipedia.org/wiki/Sic)""" - - -# Apply monkey patches -monkey.patch_all() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py deleted file mode 100644 index d4ca9b9140e3f085b36609bb8dfdaea79c78e144..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_itertools.py +++ /dev/null @@ -1,73 +0,0 @@ -from itertools import filterfalse - - -def unique_everseen(iterable, key=None): - "List unique elements, preserving order. Remember all elements ever seen." - # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element - else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element - - -# copied from more_itertools 8.8 -def always_iterable(obj, base_type=(str, bytes)): - """If *obj* is iterable, return an iterator over its items:: - - >>> obj = (1, 2, 3) - >>> list(always_iterable(obj)) - [1, 2, 3] - - If *obj* is not iterable, return a one-item iterable containing *obj*:: - - >>> obj = 1 - >>> list(always_iterable(obj)) - [1] - - If *obj* is ``None``, return an empty iterable: - - >>> obj = None - >>> list(always_iterable(None)) - [] - - By default, binary and text strings are not considered iterable:: - - >>> obj = 'foo' - >>> list(always_iterable(obj)) - ['foo'] - - If *base_type* is set, objects for which ``isinstance(obj, base_type)`` - returns ``True`` won't be considered iterable. - - >>> obj = {'a': 1} - >>> list(always_iterable(obj)) # Iterate over the dict's keys - ['a'] - >>> list(always_iterable(obj, base_type=dict)) # Treat dicts as a unit - [{'a': 1}] - - Set *base_type* to ``None`` to avoid any special handling and treat objects - Python considers iterable as iterable: - - >>> obj = 'foo' - >>> list(always_iterable(obj, base_type=None)) - ['f', 'o', 'o'] - """ - if obj is None: - return iter(()) - - if (base_type is not None) and isinstance(obj, base_type): - return iter((obj,)) - - try: - return iter(obj) - except TypeError: - return iter((obj,)) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/filepost.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/filepost.py deleted file mode 100644 index 36c9252c647e67bc7353c523152568b993c1331f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/filepost.py +++ /dev/null @@ -1,98 +0,0 @@ -from __future__ import absolute_import - -import binascii -import codecs -import os -from io import BytesIO - -from .fields import RequestField -from .packages import six -from .packages.six import b - -writer = codecs.lookup("utf-8")[3] - - -def choose_boundary(): - """ - Our embarrassingly-simple replacement for mimetools.choose_boundary. - """ - boundary = binascii.hexlify(os.urandom(16)) - if not six.PY2: - boundary = boundary.decode("ascii") - return boundary - - -def iter_field_objects(fields): - """ - Iterate over fields. - - Supports list of (k, v) tuples and dicts, and lists of - :class:`~urllib3.fields.RequestField`. 
- - """ - if isinstance(fields, dict): - i = six.iteritems(fields) - else: - i = iter(fields) - - for field in i: - if isinstance(field, RequestField): - yield field - else: - yield RequestField.from_tuples(*field) - - -def iter_fields(fields): - """ - .. deprecated:: 1.6 - - Iterate over fields. - - The addition of :class:`~urllib3.fields.RequestField` makes this function - obsolete. Instead, use :func:`iter_field_objects`, which returns - :class:`~urllib3.fields.RequestField` objects. - - Supports list of (k, v) tuples and dicts. - """ - if isinstance(fields, dict): - return ((k, v) for k, v in six.iteritems(fields)) - - return ((k, v) for k, v in fields) - - -def encode_multipart_formdata(fields, boundary=None): - """ - Encode a dictionary of ``fields`` using the multipart/form-data MIME format. - - :param fields: - Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). - - :param boundary: - If not specified, then a random boundary will be generated using - :func:`urllib3.filepost.choose_boundary`. - """ - body = BytesIO() - if boundary is None: - boundary = choose_boundary() - - for field in iter_field_objects(fields): - body.write(b("--%s\r\n" % (boundary))) - - writer(body).write(field.render_headers()) - data = field.data - - if isinstance(data, int): - data = str(data) # Backwards compatibility - - if isinstance(data, six.text_type): - writer(body).write(data) - else: - body.write(data) - - body.write(b"\r\n") - - body.write(b("--%s--\r\n" % (boundary))) - - content_type = str("multipart/form-data; boundary=%s" % boundary) - - return body.getvalue(), content_type diff --git a/spaces/BlitzenPrancer/TheBloke-guanaco-65B-HF/README.md b/spaces/BlitzenPrancer/TheBloke-guanaco-65B-HF/README.md deleted file mode 100644 index fec7d2e52563d407fddd4c08a6fb6899ff90b414..0000000000000000000000000000000000000000 --- a/spaces/BlitzenPrancer/TheBloke-guanaco-65B-HF/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TheBloke Guanaco 65B HF -emoji: 🐨 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CNXT/GPTx/app.py b/spaces/CNXT/GPTx/app.py deleted file mode 100644 index 4205e03f91904065e1610f7e6c7b2f1de1771184..0000000000000000000000000000000000000000 --- a/spaces/CNXT/GPTx/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/gpt2").launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/__init__.py deleted file mode 100644 index f996ecd74947c504f86e3e6854a45bd74ad32c1c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -from .compat import downgrade_config, upgrade_config -from .config import CfgNode, get_cfg, global_cfg, set_global_cfg, configurable - -__all__ = [ - "CfgNode", - "get_cfg", - "global_cfg", - "set_global_cfg", - "downgrade_config", - "upgrade_config", - "configurable", -] diff --git a/spaces/CVPR/LIVE/shape.h b/spaces/CVPR/LIVE/shape.h deleted file mode 100644 index b549f31e73a65696b1a0ac9814ddeedba20cf121..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/shape.h +++ /dev/null @@ -1,169 +0,0 @@ -#pragma once - -#include "diffvg.h" -#include "color.h" -#include "ptr.h" -#include "vector.h" -#include "matrix.h" - -enum class ShapeType { - Circle, - Ellipse, - Path, - Rect -}; - -struct Circle { - float radius; - Vector2f center; - - ptr get_ptr() { - return ptr(this); - } -}; - -struct Ellipse { - Vector2f radius; - Vector2f center; - - ptr get_ptr() { - return ptr(this); - } -}; - -struct Path { - Path(ptr num_control_points, - ptr points, - ptr thickness, - int num_base_points, - int num_points, - bool is_closed, - bool use_distance_approx) : - num_control_points(num_control_points.get()), - points(points.get()), - thickness(thickness.get()), - num_base_points(num_base_points), - num_points(num_points), - is_closed(is_closed), - use_distance_approx(use_distance_approx) {} - - int *num_control_points; - float *points; - float *thickness; - int num_base_points; - int num_points; - bool is_closed; - bool use_distance_approx; - - bool has_thickness() const { - return thickness != nullptr; - } - void copy_to(ptr points, ptr thickness) const; - - ptr get_ptr() { - return ptr(this); - } -}; - -struct Rect { - Vector2f p_min; - Vector2f p_max; - - ptr get_ptr() { - return ptr(this); - } -}; - -struct Shape { - Shape() {} - Shape(const ShapeType &type, - ptr shape_ptr, - float stroke_width) - : type(type), ptr(shape_ptr.get()), stroke_width(stroke_width) {} - - Circle as_circle() const { - return *(Circle*)ptr; - } - - Ellipse as_ellipse() const { - return *(Ellipse*)ptr; - } - - Path as_path() const { - return *(Path*)ptr; - } - - Rect as_rect() const { - return *(Rect*)ptr; - } - - ShapeType type; - void *ptr; - float stroke_width; -}; - -struct ShapeGroup { - ShapeGroup() {} - ShapeGroup(ptr shape_ids, - int num_shapes, - const ColorType &fill_color_type, - ptr fill_color, - const ColorType &stroke_color_type, - ptr stroke_color, - bool use_even_odd_rule, - ptr shape_to_canvas) - : shape_ids(shape_ids.get()), - num_shapes(num_shapes), - fill_color_type(fill_color_type), - fill_color(fill_color.get()), - stroke_color_type(stroke_color_type), - stroke_color(stroke_color.get()), - use_even_odd_rule(use_even_odd_rule), - shape_to_canvas(shape_to_canvas.get()) { - canvas_to_shape = inverse(this->shape_to_canvas); - } - - bool has_fill_color() const { - return fill_color != nullptr; - } - - Constant fill_color_as_constant() const { - return *(Constant*)fill_color; - } - - LinearGradient fill_color_as_linear_gradient() const { - return *(LinearGradient*)fill_color; - } - - RadialGradient fill_color_as_radial_gradient() const { - return *(RadialGradient*)fill_color; - } - - bool has_stroke_color() const { - return stroke_color != nullptr; - } - - Constant stroke_color_as_constant() const { - return *(Constant*)stroke_color; - } - - LinearGradient stroke_color_as_linear_gradient() const { - return *(LinearGradient*)stroke_color; - } - - RadialGradient stroke_color_as_radial_gradient() const { - return *(RadialGradient*)stroke_color; - } - - void copy_to(ptr 
shape_to_canvas) const; - - int *shape_ids; - int num_shapes; - ColorType fill_color_type; - void *fill_color; - ColorType stroke_color_type; - void *stroke_color; - bool use_even_odd_rule; - Matrix3x3f canvas_to_shape; - Matrix3x3f shape_to_canvas; -}; diff --git a/spaces/CVPR/LIVE/solve.h b/spaces/CVPR/LIVE/solve.h deleted file mode 100644 index 99f730d627d4e69b0973073593fb23ac54637f06..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/solve.h +++ /dev/null @@ -1,59 +0,0 @@ -#pragma once - -#include "diffvg.h" - -template -DEVICE -inline bool solve_quadratic(T a, T b, T c, T *t0, T *t1) { - // From https://github.com/mmp/pbrt-v3/blob/master/src/core/pbrt.h#L419 - T discrim = square(b) - 4 * a * c; - if (discrim < 0) { - return false; - } - T root_discrim = sqrt(discrim); - - T q; - if (b < 0) { - q = -0.5f * (b - root_discrim); - } else { - q = -0.5f * (b + root_discrim); - } - *t0 = q / a; - *t1 = c / q; - if (*t0 > *t1) { - swap_(*t0, *t1); - } - return true; -} - -template -DEVICE -inline int solve_cubic(T a, T b, T c, T d, T t[3]) { - if (fabs(a) < 1e-6f) { - if (solve_quadratic(b, c, d, &t[0], &t[1])) { - return 2; - } else { - return 0; - } - } - // normalize cubic equation - b /= a; - c /= a; - d /= a; - T Q = (b * b - 3 * c) / 9.f; - T R = (2 * b * b * b - 9 * b * c + 27 * d) / 54.f; - if (R * R < Q * Q * Q) { - // 3 real roots - T theta = acos(R / sqrt(Q * Q * Q)); - t[0] = -2.f * sqrt(Q) * cos(theta / 3.f) - b / 3.f; - t[1] = -2.f * sqrt(Q) * cos((theta + 2.f * T(M_PI)) / 3.f) - b / 3.f; - t[2] = -2.f * sqrt(Q) * cos((theta - 2.f * T(M_PI)) / 3.f) - b / 3.f; - return 3; - } else { - T A = R > 0 ? -pow(R + sqrt(R * R - Q * Q * Q), T(1./3.)): - pow(-R + sqrt(R * R - Q * Q * Q), T(1./3.)); - T B = fabs(A) > 1e-6f ? Q / A : T(0); - t[0] = (A + B) - b / T(3); - return 1; - } -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/debug.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/debug.h deleted file mode 100644 index 16f65d67c9054f4a32a6c7a4e437b1cdb16a6c30..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/debug.h +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#ifndef THRUST_DEBUG -# ifndef NDEBUG -# if defined(DEBUG) || defined(_DEBUG) -# define THRUST_DEBUG 1 -# endif // (DEBUG || _DEBUG) -# endif // NDEBUG -#endif // THRUST_DEBUG - -#if THRUST_DEBUG -# ifndef __THRUST_SYNCHRONOUS -# define __THRUST_SYNCHRONOUS 1 -# endif // __THRUST_SYNCHRONOUS -#endif // THRUST_DEBUG - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/copy_if.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/copy_if.h deleted file mode 100644 index 31adaf8e1a7366071924404561873a1cf6b89042..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/copy_if.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy_if.h of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the copy_if.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch copy_if - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. -#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_COPY_IF_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/copy_if.h> -#include __THRUST_HOST_SYSTEM_COPY_IF_HEADER -#undef __THRUST_HOST_SYSTEM_COPY_IF_HEADER - -#define __THRUST_DEVICE_SYSTEM_COPY_IF_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/copy_if.h> -#include __THRUST_DEVICE_SYSTEM_COPY_IF_HEADER -#undef __THRUST_DEVICE_SYSTEM_COPY_IF_HEADER - diff --git a/spaces/Chomkwoy/Nilkessye/model.py b/spaces/Chomkwoy/Nilkessye/model.py deleted file mode 100644 index 959238070e0160ab0eab43b87f8bacbb1f32da35..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/model.py +++ /dev/null @@ -1,388 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from cpool_new import TopPool, BottomPool, LeftPool, RightPool - - -class pool(nn.Module): - def __init__(self, dim, pool1, pool2): - super(pool, self).__init__() - self.p1_conv1 = convolution(3, dim, 128) - self.p2_conv1 = convolution(3, dim, 128) - - self.p_conv1 = nn.Conv2d(128, dim, 3, padding=1, bias=False) - self.p_bn1 = nn.BatchNorm2d(dim) - - self.conv1 = nn.Conv2d(dim, dim, 1, bias=False) - self.bn1 = nn.BatchNorm2d(dim) - - self.conv2 = convolution(3, dim, dim) - - self.pool1 = pool1() - self.pool2 = pool2() - - self.look_conv1 = convolution(3, dim, 128) - self.look_conv2 = convolution(3, dim, 128) - self.p1_look_conv = nn.Conv2d(128, 128, 3, padding=1, bias=False) - self.p2_look_conv = nn.Conv2d(128, 128, 3, padding=1, bias=False) - - def forward(self, x): - pool1 = self.pool1(self.p1_look_conv(self.p1_conv1(x) + - self.pool2(self.look_conv1(x)))) - pool2 = self.pool2(self.p2_look_conv(self.p2_conv1(x) + - self.pool1(self.look_conv2(x)))) - - p_bn1 = self.p_bn1(self.p_conv1(pool1 + pool2)) - bn1 = self.bn1(self.conv1(x)) - 
- out = self.conv2(F.relu(p_bn1 + bn1, inplace=True)) - return out - - -class pool_cross(nn.Module): - def __init__(self, dim): - super(pool_cross, self).__init__() - self.p1_conv1 = convolution(3, dim, 128) - self.p2_conv1 = convolution(3, dim, 128) - - self.p_conv1 = nn.Conv2d(128, dim, 3, padding=1, bias=False) - self.p_bn1 = nn.BatchNorm2d(dim) - - self.conv1 = nn.Conv2d(dim, dim, 1, bias=False) - self.bn1 = nn.BatchNorm2d(dim) - - self.conv2 = convolution(3, dim, dim) - - self.pool_top = TopPool() - self.pool_left = LeftPool() - self.pool_bottom = BottomPool() - self.pool_right = RightPool() - - def forward(self, x): - # pool 1 - pool1 = self.pool_bottom(self.pool_top(self.p1_conv1(x))) - - # pool 2 - pool2 = self.pool_right(self.pool_left(self.p2_conv1(x))) - - # pool 1 + pool 2 - p_bn1 = self.p_bn1(self.p_conv1(pool1 + pool2)) - bn1 = self.bn1(self.conv1(x)) - - out = self.conv2(F.relu(p_bn1 + bn1, inplace=True)) - return out - - -class convolution(nn.Module): - def __init__(self, k, inp_dim, out_dim, stride=1, with_bn=True): - super(convolution, self).__init__() - - pad = (k - 1) // 2 - self.conv = nn.Conv2d(inp_dim, out_dim, (k, k), padding=(pad, pad), stride=(stride, stride), bias=not with_bn) - self.bn = nn.BatchNorm2d(out_dim) if with_bn else nn.Sequential() - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - conv = self.conv(x) - bn = self.bn(conv) - relu = self.relu(bn) - return relu - - -class residual(nn.Module): - def __init__(self, k, inp_dim, out_dim, stride=1, with_bn=True): - super(residual, self).__init__() - - self.conv1 = nn.Conv2d(inp_dim, out_dim, (3, 3), padding=(1, 1), stride=(stride, stride), bias=False) - self.bn1 = nn.BatchNorm2d(out_dim) - self.relu1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(out_dim, out_dim, (3, 3), padding=(1, 1), bias=False) - self.bn2 = nn.BatchNorm2d(out_dim) - - self.skip = nn.Sequential(nn.Conv2d(inp_dim, out_dim, (1, 1), stride=(stride, stride), bias=False), - nn.BatchNorm2d(out_dim)) \ - if stride != 1 or inp_dim != out_dim else nn.Sequential() - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - conv1 = self.conv1(x) - bn1 = self.bn1(conv1) - relu1 = self.relu1(bn1) - - conv2 = self.conv2(relu1) - bn2 = self.bn2(conv2) - - skip = self.skip(x) - return self.relu(bn2 + skip) - - -# inp_dim -> out_dim -> ... -> out_dim -def make_layer(kernel_size, inp_dim, out_dim, modules, layer, stride=1): - layers = [layer(kernel_size, inp_dim, out_dim, stride=stride)] - layers += [layer(kernel_size, out_dim, out_dim) for _ in range(modules - 1)] - return nn.Sequential(*layers) - - -# inp_dim -> inp_dim -> ... -> inp_dim -> out_dim -def make_layer_revr(kernel_size, inp_dim, out_dim, modules, layer): - layers = [layer(kernel_size, inp_dim, inp_dim) for _ in range(modules - 1)] - layers.append(layer(kernel_size, inp_dim, out_dim)) - return nn.Sequential(*layers) - - -# key point layer -def make_kp_layer(cnv_dim, curr_dim, out_dim): - return nn.Sequential(convolution(3, cnv_dim, curr_dim, with_bn=False), - nn.Conv2d(curr_dim, out_dim, (1, 1))) - - -class kp_module(nn.Module): - def __init__(self, n, dims, modules): - super(kp_module, self).__init__() - - self.n = n - - curr_modules = modules[0] - next_modules = modules[1] - - curr_dim = dims[0] - next_dim = dims[1] - - # Upper branch: Repeat curr_mod times residual, curr_dim -> curr_dim -> ... -> curr_dim - self.top = make_layer(3, curr_dim, curr_dim, curr_modules, layer=residual) - # The resolution should have been halved here... 
- self.down = nn.Sequential() - # Repeat curr_mod times residual, curr_dim -> next_dim -> ... -> next_dim - # In fact, the resolution is reduced by the first convolutional layer here. - self.low1 = make_layer(3, curr_dim, next_dim, curr_modules, layer=residual, stride=2) - # There is still an hourglass in the middle of hourglass - # Until the end of the recursion, repeat next_mod residual times, next_dim -> next_dim -> ... -> next_dim - if self.n > 1: - self.low2 = kp_module(n - 1, dims[1:], modules[1:]) - else: - self.low2 = make_layer(3, next_dim, next_dim, next_modules, layer=residual) - # Repeat curr_mod times residual, next_dim -> next_dim -> ... -> next_dim -> curr_dim - self.low3 = make_layer_revr(3, next_dim, curr_dim, curr_modules, layer=residual) - # Resolution here X 2 - self.up = nn.Upsample(scale_factor=2) - - def forward(self, x): - up1 = self.top(x) # upper branch residual - down = self.down(x) # Lower branch downsample (not merged) - low1 = self.low1(down) # Lower branch residual - low2 = self.low2(low1) # Lower branch hourglass - low3 = self.low3(low2) # Lower branch residual - up2 = self.up(low3) # Lower branch upsample - return up1 + up2 # Merge upper and lower branches - - -class exkp(nn.Module): - def __init__(self, n, nstack, dims, modules, num_classes=80, cnv_dim=256): - super(exkp, self).__init__() - - self.nstack = nstack - - curr_dim = dims[0] - - self.pre = nn.Sequential(convolution(7, 3, 128, stride=2), - residual(3, 128, curr_dim, stride=2)) - - self.kps = nn.ModuleList([kp_module(n, dims, modules) for _ in range(nstack)]) - - self.cnvs = nn.ModuleList([convolution(3, curr_dim, cnv_dim) for _ in range(nstack)]) - - self.inters = nn.ModuleList([residual(3, curr_dim, curr_dim) for _ in range(nstack - 1)]) - - self.inters_ = nn.ModuleList([nn.Sequential(nn.Conv2d(curr_dim, curr_dim, (1, 1), bias=False), - nn.BatchNorm2d(curr_dim)) - for _ in range(nstack - 1)]) - self.cnvs_ = nn.ModuleList([nn.Sequential(nn.Conv2d(cnv_dim, curr_dim, (1, 1), bias=False), - nn.BatchNorm2d(curr_dim)) - for _ in range(nstack - 1)]) - - self.cnvs_tl = nn.ModuleList([pool(cnv_dim, TopPool, LeftPool) for _ in range(nstack)]) - self.cnvs_br = nn.ModuleList([pool(cnv_dim, BottomPool, RightPool) for _ in range(nstack)]) - self.cnvs_ct = nn.ModuleList([pool_cross(cnv_dim) for _ in range(nstack)]) - - # heatmap layers - self.hmap_tl = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, num_classes) for _ in range(nstack)]) - self.hmap_br = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, num_classes) for _ in range(nstack)]) - self.hmap_ct = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, num_classes) for _ in range(nstack)]) - - # embedding layers - self.embd_tl = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, 16) for _ in range(nstack)]) - self.embd_br = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, 16) for _ in range(nstack)]) - - for hmap_tl, hmap_br, hmap_ct in zip(self.hmap_tl, self.hmap_br, self.hmap_ct): - hmap_tl[-1].bias.data.fill_(-2.19) - hmap_br[-1].bias.data.fill_(-2.19) - hmap_ct[-1].bias.data.fill_(-2.19) - - # regression layers - self.regs_tl = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)]) - self.regs_br = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, 2) for _ in range(nstack)]) - - # [offset x, offset y, sequence] - self.regs_ct = nn.ModuleList([make_kp_layer(cnv_dim, curr_dim, 3) for _ in range(nstack)]) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, inputs): - inter = self.pre(inputs) - - outs = [] - for ind in range(self.nstack): 
- kp = self.kps[ind](inter) - cnv = self.cnvs[ind](kp) - - if self.training or ind == self.nstack - 1: - cnv_tl = self.cnvs_tl[ind](cnv) - cnv_br = self.cnvs_br[ind](cnv) - cnv_ct = self.cnvs_ct[ind](cnv) - - hmap_tl, hmap_br = self.hmap_tl[ind](cnv_tl), self.hmap_br[ind](cnv_br) - embd_tl, embd_br = self.embd_tl[ind](cnv_tl), self.embd_br[ind](cnv_br) - regs_tl, regs_br = self.regs_tl[ind](cnv_tl), self.regs_br[ind](cnv_br) - - hmap_ct = self.hmap_ct[ind](cnv_ct) - regs_ct = self.regs_ct[ind](cnv_ct) - - outs.append([hmap_tl, hmap_br, hmap_ct, embd_tl, embd_br, regs_tl, regs_br, regs_ct]) - - if ind < self.nstack - 1: - inter = self.inters_[ind](inter) + self.cnvs_[ind](cnv) - inter = self.relu(inter) - inter = self.inters[ind](inter) - return outs - - -def _tranpose_and_gather_feature(feature, ind): - feature = feature.permute(0, 2, 3, 1).contiguous() # [B, C, H, W] => [B, H, W, C] - feature = feature.view(feature.size(0), -1, feature.size(3)) # [B, H, W, C] => [B, H x W, C] - ind = ind[:, :, None].expand(ind.shape[0], ind.shape[1], feature.shape[-1]) # [B, num_obj] => [B, num_obj, C] - feature = feature.gather(1, ind) # [B, H x W, C] => [B, num_obj, C] - return feature - - -def _neg_loss(preds, targets): - pos_inds = targets == 1 # todo targets > 1-epsilon ? - neg_inds = targets < 1 # todo targets < 1-epsilon ? - - neg_weights = torch.pow(1 - targets[neg_inds], 4) - - loss = 0 - for pred in preds: - pred = torch.clamp(torch.sigmoid(pred), min=1e-4, max=1 - 1e-4) - pos_pred = pred[pos_inds] - neg_pred = pred[neg_inds] - - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, 2) - neg_loss = torch.log(1 - neg_pred) * torch.pow(neg_pred, 2) * neg_weights - - num_pos = pos_inds.float().sum() - pos_loss = pos_loss.sum() - neg_loss = neg_loss.sum() - - if pos_pred.nelement() == 0: - loss = loss - neg_loss - else: - loss = loss - (pos_loss + neg_loss) / num_pos - return loss / len(preds) - - -def _ae_loss(embd0s, embd1s, mask): - num = mask.sum(dim=1, keepdim=True).unsqueeze(2).float() # [B, 1] - - pull, push = 0, 0 - for embd0, embd1 in zip(embd0s, embd1s): # [B, num_obj, vec_dim] - embd_mean = (embd0 + embd1) / 2 - - embd0 = (embd0 - embd_mean)**2 / (num + 1e-4) - embd0 = embd0[mask].sum() - embd1 = (embd1 - embd_mean)**2 / (num + 1e-4) - embd1 = embd1[mask].sum() - pull += embd0 + embd1 - - push_mask = (mask[:, None, :] + mask[:, :, None]) == 2 # [B, num_obj, num_obj] - dist = F.relu(1 - (embd_mean[:, None, :] - embd_mean[:, :, None]).abs(), inplace=True) - dist = dist - 1 / (num[:, :, None] + 1e-4) # substract diagonal elements - dist = dist / ((num - 1) * num + 1e-4)[:, :, None] # total num element is n*n-n - push += dist[push_mask].sum() - return pull / len(embd0s), push / len(embd0s) - - -def _reg_loss(regs, gt_regs, mask): - num = mask.float().sum() + 1e-4 - mask = mask[:, :, None].expand_as(gt_regs) # [B, num_obj, 2] - loss = sum([F.smooth_l1_loss(r[mask], gt_regs[mask], reduction='sum') / num for r in regs]) - return loss / len(regs) - - -def spearman(pred, target, mask): - import torchsort - x = 1e-2 - pred = torchsort.soft_rank(pred, regularization_strength=x) # [B, L] - target = torchsort.soft_rank(target, regularization_strength=x) # [B, L] - pred = pred - (pred * mask).sum(1, keepdim=True) / mask.sum(1, keepdim=True) - pred = pred / ((pred * mask)**2).sum(1, keepdim=True).sqrt() - target = target - (target * mask).sum(1, keepdim=True) / mask.sum(1, keepdim=True) - target = target / ((target * mask)**2).sum(1, keepdim=True).sqrt() - - return (pred * target * mask).sum(1) # [B] 
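# A minimal usage sketch for the loss helpers above, assuming dummy tensors shaped like a
# real training batch and that torchsort is installed (spearman imports it internally).
# The dummy_* names are illustrative only and are not part of the original file.
if __name__ == "__main__":
    dummy_hmap_pred = [torch.randn(2, 1, 32, 32)]     # per-stack heatmap logits
    dummy_hmap_gt = torch.rand(2, 1, 32, 32)          # gaussian ground-truth heatmap in [0, 1)
    print("focal loss:", _neg_loss(dummy_hmap_pred, dummy_hmap_gt).item())

    dummy_scores = torch.randn(2, 5)                  # predicted reading-order scores per object
    dummy_order = torch.arange(5.0).repeat(2, 1)      # ground-truth sequence positions
    dummy_mask = torch.ones(2, 5)                     # all five objects are valid
    print("soft spearman:", spearman(dummy_scores, dummy_order, dummy_mask))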
- - -def compute_loss(preds, batch, return_dict=False): - """ - batch: { - 'image' : [B, C, H, W] - 'hmap_tl', 'hmap_br', 'hmap_ct' : [n_cls, fH, fW]. Heatmap (gaussian). - 'embd_tl', 'embd_br' : [fH, fW]. Corner embeddings - 'regs_tl', 'regs_br', 'regs_ct' : [max_objs, 2]. Offsets (compensating for discretization error). In [0, 1) range. - 'inds_tl', 'inds_br', 'inds_ct' : [max_objs] int64. Indices of heatmap pixel. - 'ind_masks': [max_objs] bool. How many objects are there. - } - """ - hmap_tl, hmap_br, hmap_ct, embd_tl, embd_br, regs_tl, regs_br, regs_ct_and_seq = zip(*preds) - - embd_tl = [_tranpose_and_gather_feature(e, batch['inds_tl']) for e in embd_tl] - embd_br = [_tranpose_and_gather_feature(e, batch['inds_br']) for e in embd_br] - regs_tl = [_tranpose_and_gather_feature(r, batch['inds_tl']) for r in regs_tl] - regs_br = [_tranpose_and_gather_feature(r, batch['inds_br']) for r in regs_br] - regs_ct = [_tranpose_and_gather_feature(r, batch['inds_ct'])[..., :2] for r in regs_ct_and_seq] - seq_pred = [_tranpose_and_gather_feature(r, batch['inds_ct'])[..., 2] for r in regs_ct_and_seq] - - focal_loss = _neg_loss(hmap_tl, batch['hmap_tl']) + \ - _neg_loss(hmap_br, batch['hmap_br']) + \ - _neg_loss(hmap_ct, batch['hmap_ct']) - reg_loss = _reg_loss(regs_tl, batch['regs_tl'], batch['ind_masks']) + \ - _reg_loss(regs_br, batch['regs_br'], batch['ind_masks']) + \ - _reg_loss(regs_ct, batch['regs_ct'], batch['ind_masks']) - pull_loss, push_loss = _ae_loss(embd_tl, embd_br, batch['ind_masks']) - - loss = focal_loss + 0.1 * pull_loss + 0.1 * push_loss + reg_loss - - if 'sequence' in batch and 'sequence_valid' in batch: - rank_loss = sum( - torch.log(1 - spearman( - s[batch['sequence_valid']], - batch['sequence'][batch['sequence_valid']], - batch['ind_masks'][batch['sequence_valid']] - ) + 1e-6) - for s in seq_pred - ) / len(seq_pred) - loss = loss + rank_loss * 0.1 - else: - rank_loss = torch.tensor([0.0], device=loss.device) - - if return_dict: - return loss, { - 'focal_loss': focal_loss, - 'pull_loss': pull_loss, - 'push_loss': push_loss, - 'reg_loss': reg_loss, - 'orig_loss': focal_loss + 0.1 * pull_loss + 0.1 * push_loss + reg_loss, - 'rank_loss': rank_loss, - } - - return loss diff --git a/spaces/CikeyQI/meme-api/meme_generator/cli.py b/spaces/CikeyQI/meme-api/meme_generator/cli.py deleted file mode 100644 index d3398b3ed1edaeba7bf4df106f00ce509c7c364e..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/cli.py +++ /dev/null @@ -1,200 +0,0 @@ -import asyncio -import copy -from argparse import ArgumentParser -from pathlib import Path -from typing import Any, Dict, List - -import filetype - -from meme_generator.app import run_server -from meme_generator.config import meme_config -from meme_generator.download import check_resources -from meme_generator.exception import MemeGeneratorException, NoSuchMeme -from meme_generator.log import setup_logger -from meme_generator.manager import get_meme, get_memes - -parser = ArgumentParser("meme") -subparsers = parser.add_subparsers(dest="handle") - -list_parser = subparsers.add_parser("list", aliases=["ls"], help="查看表情列表") - -show_parser = subparsers.add_parser("info", aliases=["show"], help="查看表情详情") -show_parser.add_argument("key", type=str, help="表情名") - -preview_parser = subparsers.add_parser("preview", help="生成表情预览") -preview_parser.add_argument("key", type=str, help="表情名") - -generate_parser = subparsers.add_parser("generate", aliases=["make"], help="制作表情") -memes_subparsers = 
generate_parser.add_subparsers(dest="key", help="表情名") - -run_parser = subparsers.add_parser("run", aliases=["start"], help="启动 web server") - -download_parser = subparsers.add_parser("download", help="下载内置表情图片") -download_parser.add_argument( - "--url", type=str, help="指定资源链接", default=meme_config.resource.resource_url -) - - -def add_parsers(): - for meme in get_memes(): - meme_parser = ( - copy.deepcopy(meme.params_type.args_type.parser) - if meme.params_type.args_type - else ArgumentParser() - ) - meme_parser.add_argument("--images", nargs="+", default=[], help="输入图片路径") - meme_parser.add_argument("--texts", nargs="+", default=[], help="输入文字") - memes_subparsers.add_parser( - meme.key, - parents=[meme_parser], - add_help=False, - prefix_chars=meme_parser.prefix_chars, - ) - - -def list_memes() -> str: - memes = sorted(get_memes(), key=lambda meme: meme.key) - return "\n".join( - f"{i}. {meme.key} ({'/'.join(meme.keywords)})" - for i, meme in enumerate(memes, start=1) - ) - - -def meme_info(key: str) -> str: - try: - meme = get_meme(key) - except NoSuchMeme: - return f'表情 "{key}" 不存在!' - - keywords = "、".join([f'"{keyword}"' for keyword in meme.keywords]) - - patterns = "、".join([f'"{pattern}"' for pattern in meme.patterns]) - - image_num = f"{meme.params_type.min_images}" - if meme.params_type.max_images > meme.params_type.min_images: - image_num += f" ~ {meme.params_type.max_images}" - - text_num = f"{meme.params_type.min_texts}" - if meme.params_type.max_texts > meme.params_type.min_texts: - text_num += f" ~ {meme.params_type.max_texts}" - - default_texts = ", ".join([f'"{text}"' for text in meme.params_type.default_texts]) - - def arg_info(name: str, info: Dict[str, Any]) -> str: - text = ( - f' "{name}"\n' - f" 描述:{info.get('description', '')}\n" - f" 类型:`{info.get('type', '')}`\n" - f" 默认值:`{info.get('default', '')}`" - ) - if enum := info.get("enum", []): - assert isinstance(enum, list) - text += "\n 可选值:" + "、".join([f'"{e}"' for e in enum]) - return text - - if args := meme.params_type.args_type: - model = args.model - properties: Dict[str, Dict[str, Any]] = model.schema().get("properties", {}) - properties.pop("user_infos") - args_info = "\n" + "\n".join( - [arg_info(name, info) for name, info in properties.items()] - ) - else: - args_info = "" - - return ( - f"表情名:{meme.key}\n" - + f"关键词:{keywords}\n" - + (f"正则表达式:{patterns}\n" if patterns else "") - + "参数:\n" - + f" 需要图片数目:{image_num}\n" - + f" 需要文字数目:{text_num}\n" - + (f" 默认文字:[{default_texts}]\n" if default_texts else "") - + (f" 其他参数:{args_info}\n" if args_info else "") - ) - - -def generate_meme_preview(key: str) -> str: - try: - meme = get_meme(key) - except NoSuchMeme: - return f'表情 "{key}" 不存在!' - - try: - loop = asyncio.new_event_loop() - result = loop.run_until_complete(meme.generate_preview()) - content = result.getvalue() - ext = filetype.guess_extension(content) - filename = f"result.{ext}" - with open(filename, "wb") as f: - f.write(content) - return f'表情制作成功!生成的表情文件为 "{filename}"' - except MemeGeneratorException as e: - return str(e) - - -def generate_meme( - key: str, images: List[str], texts: List[str], args: Dict[str, Any] -) -> str: - try: - meme = get_meme(key) - except NoSuchMeme: - return f'表情 "{key}" 不存在!' - - for image in images: - if not Path(image).exists(): - return f'图片路径 "{image}" 不存在!' 
- - try: - loop = asyncio.new_event_loop() - result = loop.run_until_complete(meme(images=images, texts=texts, args=args)) - content = result.getvalue() - ext = filetype.guess_extension(content) - filename = f"result.{ext}" - with open(filename, "wb") as f: - f.write(content) - return f'表情制作成功!生成的表情文件为 "{filename}"' - except MemeGeneratorException as e: - return str(e) - - -def main(): - setup_logger() - add_parsers() - - args = parser.parse_args() - handle = str(args.handle) - - if handle in ["list", "ls"]: - print(list_memes()) - - elif handle in ["info", "show"]: - key = str(args.key) - print(meme_info(key)) - - elif handle in ["preview"]: - key = str(args.key) - print(generate_meme_preview(key)) - - elif handle in ["generate", "make"]: - kwargs = vars(args) - kwargs.pop("handle") - key: str = kwargs.pop("key") - images: List[str] = kwargs.pop("images") - texts: List[str] = kwargs.pop("texts") - print(generate_meme(key, images, texts, kwargs)) - - elif handle in ["run", "start"]: - run_server() - - elif handle in ["download"]: - meme_config.resource.resource_url = args.url - loop = asyncio.new_event_loop() - loop.run_until_complete(check_resources()) - - else: - print(parser.format_help()) - - -if __name__ == "__main__": - main() diff --git a/spaces/Cletrason/Cletrason-toad-mario-movie/app_pix2pix_video.py b/spaces/Cletrason/Cletrason-toad-mario-movie/app_pix2pix_video.py deleted file mode 100644 index fec64a5abc56801d56c602b915338bce63f79a17..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/Cletrason-toad-mario-movie/app_pix2pix_video.py +++ /dev/null @@ -1,103 +0,0 @@ -import gradio as gr -from model import Model -import os -on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR" - - -def create_demo(model: Model): - examples = [ - ['__assets__/pix2pix_video_2fps/camel.mp4', - 'make it Van Gogh Starry Night style', 512, 0, 1.0], - ['__assets__/pix2pix_video_2fps/mini-cooper.mp4', - 'make it Picasso style', 512, 0, 1.5], - ['__assets__/pix2pix_video_2fps/snowboard.mp4', - 'replace man with robot', 512, 0, 1.0], - ['__assets__/pix2pix_video_2fps/white-swan.mp4', - 'replace swan with mallard', 512, 0, 1.5], - ['__assets__/pix2pix_video_2fps/boat.mp4', - 'add city skyline in the background', 512, 0, 1.5], - ['__assets__/pix2pix_video_2fps/ballet.mp4', - 'make her a golden sculpture', 512, 0, 1.0], - ] - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Video Instruct Pix2Pix') - with gr.Row(): - gr.HTML( - """ -
    - Description: For performance purposes, our current preview release supports any input videos but caps output videos after 80 frames and the input videos are scaled down before processing. For faster inference you can choose lower output frames per seconds from Advanced Options.
    - """) - - with gr.Row(): - with gr.Column(): - input_image = gr.Video(label="Input Video", source='upload', - type='numpy', format="mp4", visible=True).style(height="auto") - with gr.Column(): - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - watermark = gr.Radio(["Picsart AI Research", "Text2Video-Zero", - "None"], label="Watermark", value='Picsart AI Research') - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=1024, - value=512, - step=64) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=65536, - value=0, - step=1) - image_guidance = gr.Slider(label='Image guidance scale', - minimum=0.5, - maximum=2, - value=1.0, - step=0.1) - start_t = gr.Slider(label='Starting time in seconds', - minimum=0, - maximum=10, - value=0, - step=1) - end_t = gr.Slider(label='End time in seconds (-1 corresponds to uploaded video duration)', - minimum=0, - maximum=10, - value=-1, - step=1) - out_fps = gr.Slider(label='Output video fps (-1 corresponds to uploaded video fps)', - minimum=1, - maximum=30, - value=-1, - step=1) - chunk_size = gr.Slider( - label="Chunk size", minimum=2, maximum=16, value=12 if on_huggingspace else 8, step=1, visible=not on_huggingspace) - with gr.Column(): - result = gr.Video(label='Output', show_label=True) - inputs = [ - input_image, - prompt, - image_resolution, - seed, - image_guidance, - start_t, - end_t, - out_fps, - chunk_size, - watermark - ] - - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=model.process_pix2pix, - cache_examples=on_huggingspace, - run_on_click=False, - ) - - run_button.click(fn=model.process_pix2pix, - inputs=inputs, - outputs=result) - return demo diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/events.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/events.py deleted file mode 100644 index 524bceb312391b3458a0817ade97fa3788098965..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/events.py +++ /dev/null @@ -1,347 +0,0 @@ -"""Contains all of the events that can be triggered in a gr.Blocks() app, with the exception -of the on-page-load event, which is defined in gr.Blocks().load().""" - -from __future__ import annotations - -from typing import TYPE_CHECKING, Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group - -from gradio.blocks import Block -from gradio.deprecation import warn_deprecation -from gradio.helpers import EventData -from gradio.utils import get_cancel_function - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). 
- from gradio.components import Component - -set_documentation_group("events") - - -def set_cancel_events( - block: Block, event_name: str, cancels: None | dict[str, Any] | list[dict[str, Any]] -): - if cancels: - if not isinstance(cancels, list): - cancels = [cancels] - cancel_fn, fn_indices_to_cancel = get_cancel_function(cancels) - block.set_event_trigger( - event_name, - cancel_fn, - inputs=None, - outputs=None, - queue=False, - preprocess=False, - cancels=fn_indices_to_cancel, - ) - - -class EventListener(Block): - def __init__(self: Any): - for event_listener_class in EventListener.__subclasses__(): - if isinstance(self, event_listener_class): - event_listener_class.__init__(self) - - -class Dependency(dict): - def __init__(self, trigger, key_vals, dep_index): - super().__init__(key_vals) - self.trigger = trigger - self.then = EventListenerMethod( - self.trigger, - "then", - trigger_after=dep_index, - trigger_only_on_success=False, - ) - """ - Triggered after directly preceding event is completed, regardless of success or failure. - """ - self.success = EventListenerMethod( - self.trigger, - "success", - trigger_after=dep_index, - trigger_only_on_success=True, - ) - """ - Triggered after directly preceding event is completed, if it was successful. - """ - - -class EventListenerMethod: - """ - Triggered on an event deployment. - """ - - def __init__( - self, - trigger: Block, - event_name: str, - show_progress: Literal["full", "minimal", "hidden"] = "full", - callback: Callable | None = None, - trigger_after: int | None = None, - trigger_only_on_success: bool = False, - ): - self.trigger = trigger - self.event_name = event_name - self.show_progress = show_progress - self.callback = callback - self.trigger_after = trigger_after - self.trigger_only_on_success = trigger_only_on_success - - def __call__( - self, - fn: Callable | None, - inputs: Component | list[Component] | set[Component] | None = None, - outputs: Component | list[Component] | None = None, - api_name: str | None | Literal[False] = None, - status_tracker: None = None, - scroll_to_output: bool = False, - show_progress: Literal["full", "minimal", "hidden"] = "full", - queue: bool | None = None, - batch: bool = False, - max_batch_size: int = 4, - preprocess: bool = True, - postprocess: bool = True, - cancels: dict[str, Any] | list[dict[str, Any]] | None = None, - every: float | None = None, - _js: str | None = None, - ) -> Dependency: - """ - Parameters: - fn: the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component. - inputs: List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list. - outputs: List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list. - api_name: Defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name. 
- scroll_to_output: If True, will scroll to output component on completion - show_progress: If True, will show progress animation while pending - queue: If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app. - batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component. - max_batch_size: Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True) - preprocess: If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component). - postprocess: If False, will not run postprocessing of component data before returning 'fn' output to the browser. - cancels: A list of other events to cancel when This listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish. - every: Run this event 'every' number of seconds while the client connection is open. Interpreted in seconds. Queue must be enabled. - """ - if status_tracker: - warn_deprecation( - "The 'status_tracker' parameter has been deprecated and has no effect." - ) - if self.event_name == "stop": - warn_deprecation( - "The `stop` event on Video and Audio has been deprecated and will be remove in a future version. Use `ended` instead." - ) - - if isinstance(self, Streamable): - self.check_streamable() - if isinstance(show_progress, bool): - show_progress = "full" if show_progress else "hidden" - - dep, dep_index = self.trigger.set_event_trigger( - self.event_name, - fn, - inputs, - outputs, - preprocess=preprocess, - postprocess=postprocess, - scroll_to_output=scroll_to_output, - show_progress=show_progress - if show_progress is not None - else self.show_progress, - api_name=api_name, - js=_js, - queue=queue, - batch=batch, - max_batch_size=max_batch_size, - every=every, - trigger_after=self.trigger_after, - trigger_only_on_success=self.trigger_only_on_success, - ) - set_cancel_events(self.trigger, self.event_name, cancels) - if self.callback: - self.callback() - return Dependency(self.trigger, dep, dep_index) - - -@document("*change", inherit=True) -class Changeable(EventListener): - def __init__(self): - self.change = EventListenerMethod(self, "change") - """ - This listener is triggered when the component's value changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). - See `.input()` for a listener that is only triggered by user input. - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*input", inherit=True) -class Inputable(EventListener): - def __init__(self): - self.input = EventListenerMethod(self, "input") - """ - This listener is triggered when the user changes the value of the component. 
- This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*click", inherit=True) -class Clickable(EventListener): - def __init__(self): - self.click = EventListenerMethod(self, "click") - """ - This listener is triggered when the component (e.g. a button) is clicked. - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*submit", inherit=True) -class Submittable(EventListener): - def __init__(self): - self.submit = EventListenerMethod(self, "submit") - """ - This listener is triggered when the user presses the Enter key while the component (e.g. a textbox) is focused. - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*edit", inherit=True) -class Editable(EventListener): - def __init__(self): - self.edit = EventListenerMethod(self, "edit") - """ - This listener is triggered when the user edits the component (e.g. image) using the - built-in editor. This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*clear", inherit=True) -class Clearable(EventListener): - def __init__(self): - self.clear = EventListenerMethod(self, "clear") - """ - This listener is triggered when the user clears the component (e.g. image or audio) - using the X button for the component. This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*play", "*pause", "*stop", "*end", inherit=True) -class Playable(EventListener): - def __init__(self): - self.play = EventListenerMethod(self, "play") - """ - This listener is triggered when the user plays the component (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. - """ - - self.pause = EventListenerMethod(self, "pause") - """ - This listener is triggered when the media stops playing for any reason (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. - """ - - self.stop = EventListenerMethod(self, "stop") - """ - This listener is triggered when the user reaches the end of the media track (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. - """ - - self.end = EventListenerMethod(self, "end") - """ - This listener is triggered when the user reaches the end of the media track (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*stream", inherit=True) -class Streamable(EventListener): - def __init__(self): - self.streaming: bool - self.stream = EventListenerMethod( - self, - "stream", - show_progress="hidden", - callback=lambda: setattr(self, "streaming", True), - ) - """ - This listener is triggered when the user streams the component (e.g. a live webcam - component). This method can be used when this component is in a Gradio Blocks. - """ - - def check_streamable(self): - pass - - -@document("*start_recording", "*stop_recording", inherit=True) -class Recordable(EventListener): - def __init__(self): - self.start_recording = EventListenerMethod(self, "start_recording") - """ - This listener is triggered when the user starts recording with the component (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. - """ - - self.stop_recording = EventListenerMethod(self, "stop_recording") - """ - This listener is triggered when the user stops recording with the component (e.g. audio or video). - This method can be used when this component is in a Gradio Blocks. 
- """ - - -@document("*blur", inherit=True) -class Blurrable(EventListener): - def __init__(self): - self.blur = EventListenerMethod(self, "blur") - """ - This listener is triggered when the component's is unfocused/blurred (e.g. when the user clicks outside of a textbox). - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*upload", inherit=True) -class Uploadable(EventListener): - def __init__(self): - self.upload = EventListenerMethod(self, "upload") - """ - This listener is triggered when the user uploads a file into the component (e.g. when the user uploads a video into a video component). - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*release", inherit=True) -class Releaseable(EventListener): - def __init__(self): - self.release = EventListenerMethod(self, "release") - """ - This listener is triggered when the user releases the mouse on this component (e.g. when the user releases the slider). - This method can be used when this component is in a Gradio Blocks. - """ - - -@document("*select", inherit=True) -class Selectable(EventListener): - def __init__(self): - self.selectable: bool = False - self.select = EventListenerMethod( - self, "select", callback=lambda: setattr(self, "selectable", True) - ) - """ - This listener is triggered when the user selects from within the Component. - This event has EventData of type gradio.SelectData that carries information, accessible through SelectData.index and SelectData.value. - See EventData documentation on how to use this event data. - """ - - -class SelectData(EventData): - def __init__(self, target: Block | None, data: Any): - super().__init__(target, data) - self.index: int | tuple[int, int] = data["index"] - """ - The index of the selected item. Is a tuple if the component is two dimensional or selection is a range. - """ - self.value: Any = data["value"] - """ - The value of the selected item. - """ - self.selected: bool = data.get("selected", True) - """ - True if the item was selected, False if deselected. 
- """ diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css deleted file mode 100644 index 867db01e98d8648a1afa22a934018f3ef506a4ae..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css +++ /dev/null @@ -1 +0,0 @@ -canvas.svelte-yigbas{display:block;position:absolute;inset:0;margin:auto}.lr.svelte-yigbas{border-right:1px solid var(--border-color-primary);border-left:1px solid var(--border-color-primary)}.tb.svelte-yigbas{border-top:1px solid var(--border-color-primary);border-bottom:1px solid var(--border-color-primary)}canvas.svelte-yigbas:hover{cursor:none}.wrap.svelte-yigbas{position:relative;width:var(--size-full);height:var(--size-full);touch-action:none}.start-prompt.svelte-yigbas{display:flex;position:absolute;inset:0;justify-content:center;align-items:center;z-index:var(--layer-4);touch-action:none;pointer-events:none;color:var(--body-text-color-subdued)}.wrap.svelte-425ent{position:relative;width:var(--size-full);height:var(--size-full);min-height:var(--size-60)}video.svelte-425ent{width:var(--size-full);height:var(--size-full)}button.svelte-425ent{display:flex;position:absolute;right:0;bottom:var(--size-2);left:0;justify-content:center;align-items:center;margin:auto;box-shadow:var(--shadow-drop-lg);border-radius:var(--radius-xl);background-color:#000000e6;width:var(--size-10);height:var(--size-10)}@media (min-width: 768px){button.svelte-425ent{bottom:var(--size-4)}}@media (min-width: 1280px){button.svelte-425ent{bottom:var(--size-8)}}.icon.svelte-425ent{opacity:.8;width:50%;height:50%;color:#fff}.flip.svelte-425ent{transform:scaleX(-1)}div.svelte-s6ybro{display:flex;position:absolute;top:var(--size-2);right:var(--size-2);justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-5)}.wrap.svelte-p4aq0j.svelte-p4aq0j{display:flex;position:absolute;top:var(--size-10);right:var(--size-2);flex-direction:column;justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-5)}.brush.svelte-p4aq0j.svelte-p4aq0j{top:0;right:0}.brush.svelte-p4aq0j input.svelte-p4aq0j{position:absolute;top:3px;right:calc(100% + 5px)}.col.svelte-p4aq0j input.svelte-p4aq0j{position:absolute;right:calc(100% + 5px);bottom:-4px}.image-container.svelte-p3y7hu,img.svelte-p3y7hu{width:var(--size-full);height:var(--size-full)}img.svelte-p3y7hu{object-fit:contain}.selectable.svelte-p3y7hu{cursor:crosshair}.absolute-img.svelte-p3y7hu{position:absolute;opacity:0}.webcam.svelte-p3y7hu{transform:scaleX(-1)}img.svelte-1btp92j{width:var(--size-full);height:var(--size-full);object-fit:contain}.selectable.svelte-1btp92j{cursor:crosshair}.icon-buttons.svelte-1btp92j{display:flex;position:absolute;top:6px;right:6px;gap:var(--size-1)} diff --git a/spaces/Daniton/MagicPrompt-Stable-Diffusion/app.py b/spaces/Daniton/MagicPrompt-Stable-Diffusion/app.py deleted file mode 100644 index 18a6adc75812a56a0961fcc4ecbd90d02529864d..0000000000000000000000000000000000000000 --- a/spaces/Daniton/MagicPrompt-Stable-Diffusion/app.py +++ /dev/null @@ -1,98 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re -import os -import sys - - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') - - -def generate(starting_text): - with open("ideas.txt", "r") as f: - line = 
f.readlines() - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").capitalize() - starting_text: str = re.sub(r"\.", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 80)), num_return_sequences=1) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp) - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - -with grad.Blocks(css='style.css') as demo: - grad.HTML( - """ -
    - """ - ) - with grad.Column(elem_id="col-container"): - with grad.Row(variant="compact"): - txt = grad.Textbox( - label="Initial Text", - show_label=False, - max_lines=1, - placeholder="Enter a basic idea", - ).style( - container=False, - ) - run = grad.Button("✨ Magic Prompt ✨").style(full_width=False) - - - - with grad.Row(variant="compact"): - out = grad.Textbox( - label="Generated Text", - show_label=False, - lines=5, - ).style( - container=False, - ) - - run.click(generate, inputs=[txt], outputs=[out]) - - - - with grad.Row(): - grad.HTML( - """ - -
    Transform your boring ideas into creative masterpieces with just one click! Enter a spark of inspiration and let the "Magic Prompt" button work its magic. -
    - """ -) - - - fn=generate, - run=generate, - inputs=txt, - outputs=out - demo.launch(enable_queue=False, inline=True) \ No newline at end of file diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer_modules.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer_modules.py deleted file mode 100644 index 475d5047e8b08d51e7a91ead1bf158f004698d08..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/transformer_modules.py +++ /dev/null @@ -1,250 +0,0 @@ -""" -@Date: 2021/09/01 -@description: -""" -import warnings -import math -import torch -import torch.nn.functional as F - -from torch import nn, einsum -from einops import rearrange - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. " - "The distribution of values may be incorrect.", - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - l = norm_cdf((a - mean) / std) - u = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * l - 1, 2 * u - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - - -# compatibility pytorch < 1.4 -class GELU(nn.Module): - def forward(self, input): - return F.gelu(input) - - -class Attend(nn.Module): - - def __init__(self, dim=None): - super().__init__() - self.dim = dim - - def forward(self, input): - return F.softmax(input, dim=self.dim, dtype=input.dtype) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout=0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - GELU(), - nn.Dropout(dropout), - nn.Linear(hidden_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x): - return self.net(x) - - -class RelativePosition(nn.Module): - def __init__(self, heads, patch_num=None, rpe=None): - super().__init__() - self.rpe = rpe - self.heads = heads - self.patch_num = patch_num - - if rpe == 'lr_parameter': - # -255 ~ 0 ~ 255 all count : patch * 2 - 1 - count = patch_num * 2 - 1 - self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'lr_parameter_mirror': - # 0 ~ 127 128 ~ 1 all count : patch_num // 2 + 1 - count = patch_num // 2 + 1 - self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'lr_parameter_half': - # -127 ~ 0 ~ 128 all count : patch - count = patch_num - 
self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'fix_angle': - # 0 ~ 127 128 ~ 1 all count : patch_num // 2 + 1 - count = patch_num // 2 + 1 - # we think that closer proximity should have stronger relationships - rpe_table = (torch.arange(count, 0, -1) / count)[..., None].repeat(1, heads) - self.register_buffer('rpe_table', rpe_table) - - def get_relative_pos_embed(self): - range_vec = torch.arange(self.patch_num) - distance_mat = range_vec[None, :] - range_vec[:, None] - if self.rpe == 'lr_parameter': - # -255 ~ 0 ~ 255 -> 0 ~ 255 ~ 255 + 255 - distance_mat += self.patch_num - 1 # remove negative - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - elif self.rpe == 'lr_parameter_mirror' or self.rpe == 'fix_angle': - distance_mat[distance_mat < 0] = -distance_mat[distance_mat < 0] # mirror - distance_mat[distance_mat > self.patch_num // 2] = self.patch_num - distance_mat[ - distance_mat > self.patch_num // 2] # remove repeat - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - elif self.rpe == 'lr_parameter_half': - distance_mat[distance_mat > self.patch_num // 2] = distance_mat[ - distance_mat > self.patch_num // 2] - self.patch_num # remove repeat > 128 exp: 129 -> -127 - distance_mat[distance_mat < -self.patch_num // 2 + 1] = distance_mat[ - distance_mat < -self.patch_num // 2 + 1] + self.patch_num # remove repeat < -127 exp: -128 -> 128 - # -127 ~ 0 ~ 128 -> 0 ~ 0 ~ 127 + 127 + 128 - distance_mat += self.patch_num//2 - 1 # remove negative - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - - def forward(self, attn): - return attn + self.get_relative_pos_embed() - - -class Attention(nn.Module): - def __init__(self, dim, heads=8, dim_head=64, dropout=0., patch_num=None, rpe=None, rpe_pos=1): - """ - :param dim: - :param heads: - :param dim_head: - :param dropout: - :param patch_num: - :param rpe: relative position embedding - """ - super().__init__() - - self.relative_pos_embed = None if patch_num is None or rpe is None else RelativePosition(heads, patch_num, rpe) - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - self.rpe_pos = rpe_pos - - self.attend = Attend(dim=-1) - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x): - b, n, _, h = *x.shape, self.heads - qkv = self.to_qkv(x).chunk(3, dim=-1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), qkv) - - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - - if self.rpe_pos == 0: - if self.relative_pos_embed is not None: - dots = self.relative_pos_embed(dots) - - attn = self.attend(dots) - - if self.rpe_pos == 1: - if self.relative_pos_embed is not None: - attn = self.relative_pos_embed(attn) - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - - -class AbsolutePosition(nn.Module): - def __init__(self, dim, dropout=0., patch_num=None, ape=None): - super().__init__() - self.ape = ape - - if ape == 'lr_parameter': - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, patch_num, dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - elif ape == 'fix_angle': - angle = torch.arange(0, patch_num, dtype=torch.float) / patch_num * (math.pi * 2) - self.absolute_pos_embed = torch.sin(angle)[..., 
None].repeat(1, dim)[None] - - def forward(self, x): - return x + self.absolute_pos_embed - - -class WinAttention(nn.Module): - def __init__(self, dim, win_size=8, shift=0, heads=8, dim_head=64, dropout=0., rpe=None, rpe_pos=1): - super().__init__() - - self.win_size = win_size - self.shift = shift - self.attend = Attention(dim, heads=heads, dim_head=dim_head, - dropout=dropout, patch_num=win_size, rpe=None if rpe is None else 'lr_parameter', - rpe_pos=rpe_pos) - - def forward(self, x): - b = x.shape[0] - if self.shift != 0: - x = torch.roll(x, shifts=self.shift, dims=-2) - x = rearrange(x, 'b (m w) d -> (b m) w d', w=self.win_size) # split windows - - out = self.attend(x) - - out = rearrange(out, '(b m) w d -> b (m w) d ', b=b) # recover windows - if self.shift != 0: - out = torch.roll(out, shifts=-self.shift, dims=-2) - - return out - - -class Conv(nn.Module): - def __init__(self, dim, dropout=0.): - super().__init__() - self.dim = dim - self.net = nn.Sequential( - nn.Conv1d(dim, dim, kernel_size=3, stride=1, padding=0), - nn.Dropout(dropout) - ) - - def forward(self, x): - x = x.transpose(1, 2) - x = torch.cat([x[..., -1:], x, x[..., :1]], dim=-1) - x = self.net(x) - return x.transpose(1, 2) diff --git a/spaces/DemoLou/moe-tts/text/korean.py b/spaces/DemoLou/moe-tts/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/DemoLou/moe-tts/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return 
'스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/projector.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/projector.py deleted file mode 100644 index d63ad3573696cc22640cbeddc197d8cb15c52977..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/projector.py +++ /dev/null @@ -1,203 +0,0 @@ -import argparse -import math -import os - -import torch -from torch import optim -from torch.nn import functional as F -from torchvision import transforms -from PIL import Image -from tqdm import 
tqdm - -import lpips -from model import Generator - - -def noise_regularize(noises): - loss = 0 - - for noise in noises: - size = noise.shape[2] - - while True: - loss = ( - loss - + (noise * torch.roll(noise, shifts=1, dims=3)).mean().pow(2) - + (noise * torch.roll(noise, shifts=1, dims=2)).mean().pow(2) - ) - - if size <= 8: - break - - noise = noise.reshape([1, 1, size // 2, 2, size // 2, 2]) - noise = noise.mean([3, 5]) - size //= 2 - - return loss - - -def noise_normalize_(noises): - for noise in noises: - mean = noise.mean() - std = noise.std() - - noise.data.add_(-mean).div_(std) - - -def get_lr(t, initial_lr, rampdown=0.25, rampup=0.05): - lr_ramp = min(1, (1 - t) / rampdown) - lr_ramp = 0.5 - 0.5 * math.cos(lr_ramp * math.pi) - lr_ramp = lr_ramp * min(1, t / rampup) - - return initial_lr * lr_ramp - - -def latent_noise(latent, strength): - noise = torch.randn_like(latent) * strength - - return latent + noise - - -def make_image(tensor): - return ( - tensor.detach() - .clamp_(min=-1, max=1) - .add(1) - .div_(2) - .mul(255) - .type(torch.uint8) - .permute(0, 2, 3, 1) - .to('cpu') - .numpy() - ) - - -if __name__ == '__main__': - device = 'cuda' - - parser = argparse.ArgumentParser() - parser.add_argument('--ckpt', type=str, required=True) - parser.add_argument('--size', type=int, default=256) - parser.add_argument('--lr_rampup', type=float, default=0.05) - parser.add_argument('--lr_rampdown', type=float, default=0.25) - parser.add_argument('--lr', type=float, default=0.1) - parser.add_argument('--noise', type=float, default=0.05) - parser.add_argument('--noise_ramp', type=float, default=0.75) - parser.add_argument('--step', type=int, default=1000) - parser.add_argument('--noise_regularize', type=float, default=1e5) - parser.add_argument('--mse', type=float, default=0) - parser.add_argument('--w_plus', action='store_true') - parser.add_argument('files', metavar='FILES', nargs='+') - - args = parser.parse_args() - - n_mean_latent = 10000 - - resize = min(args.size, 256) - - transform = transforms.Compose( - [ - transforms.Resize(resize), - transforms.CenterCrop(resize), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ] - ) - - imgs = [] - - for imgfile in args.files: - img = transform(Image.open(imgfile).convert('RGB')) - imgs.append(img) - - imgs = torch.stack(imgs, 0).to(device) - - g_ema = Generator(args.size, 512, 8) - g_ema.load_state_dict(torch.load(args.ckpt)['g_ema'], strict=False) - g_ema.eval() - g_ema = g_ema.to(device) - - with torch.no_grad(): - noise_sample = torch.randn(n_mean_latent, 512, device=device) - latent_out = g_ema.style(noise_sample) - - latent_mean = latent_out.mean(0) - latent_std = ((latent_out - latent_mean).pow(2).sum() / n_mean_latent) ** 0.5 - - percept = lpips.PerceptualLoss( - model='net-lin', net='vgg', use_gpu=device.startswith('cuda') - ) - - noises = g_ema.make_noise() - - latent_in = latent_mean.detach().clone().unsqueeze(0).repeat(2, 1) - - if args.w_plus: - latent_in = latent_in.unsqueeze(1).repeat(1, g_ema.n_latent, 1) - - latent_in.requires_grad = True - - for noise in noises: - noise.requires_grad = True - - optimizer = optim.Adam([latent_in] + noises, lr=args.lr) - - pbar = tqdm(range(args.step)) - latent_path = [] - - for i in pbar: - t = i / args.step - lr = get_lr(t, args.lr) - optimizer.param_groups[0]['lr'] = lr - noise_strength = latent_std * args.noise * max(0, 1 - t / args.noise_ramp) ** 2 - latent_n = latent_noise(latent_in, noise_strength.item()) - - img_gen, _ = g_ema([latent_n], 
input_is_latent=True, noise=noises) - - batch, channel, height, width = img_gen.shape - - if height > 256: - factor = height // 256 - - img_gen = img_gen.reshape( - batch, channel, height // factor, factor, width // factor, factor - ) - img_gen = img_gen.mean([3, 5]) - - p_loss = percept(img_gen, imgs).sum() - n_loss = noise_regularize(noises) - mse_loss = F.mse_loss(img_gen, imgs) - - loss = p_loss + args.noise_regularize * n_loss + args.mse * mse_loss - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - noise_normalize_(noises) - - if (i + 1) % 100 == 0: - latent_path.append(latent_in.detach().clone()) - - pbar.set_description( - ( - f'perceptual: {p_loss.item():.4f}; noise regularize: {n_loss.item():.4f};' - f' mse: {mse_loss.item():.4f}; lr: {lr:.4f}' - ) - ) - - result_file = {'noises': noises} - - img_gen, _ = g_ema([latent_path[-1]], input_is_latent=True, noise=noises) - - filename = os.path.splitext(os.path.basename(args.files[0]))[0] + '.pt' - - img_ar = make_image(img_gen) - - for i, input_name in enumerate(args.files): - result_file[input_name] = {'img': img_gen[i], 'latent': latent_in[i]} - img_name = os.path.splitext(os.path.basename(input_name))[0] + '-project.png' - pil_img = Image.fromarray(img_ar[i]) - pil_img.save(img_name) - - torch.save(result_file, filename) diff --git a/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.py b/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.py deleted file mode 100644 index 394f746e0096ececc7b6c83daf75c21cb808385f..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.py +++ /dev/null @@ -1,389 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient resampling of 2D images.""" - -import os -import numpy as np -import torch - -from .. import custom_ops -from .. import misc -from . 
import conv2d_gradfix - -#---------------------------------------------------------------------------- - -_plugin = None - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='upfirdn2d_plugin', - sources=['upfirdn2d.cpp', 'upfirdn2d.cu'], - headers=['upfirdn2d.h'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math', '--allow-unsupported-compiler'], - ) - return True - -def _parse_scaling(scaling): - if isinstance(scaling, int): - scaling = [scaling, scaling] - assert isinstance(scaling, (list, tuple)) - assert all(isinstance(x, int) for x in scaling) - sx, sy = scaling - assert sx >= 1 and sy >= 1 - return sx, sy - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, int) for x in padding) - if len(padding) == 2: - padx, pady = padding - padding = [padx, padx, pady, pady] - padx0, padx1, pady0, pady1 = padding - return padx0, padx1, pady0, pady1 - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - fw = f.shape[-1] - fh = f.shape[0] - with misc.suppress_tracer_warnings(): - fw = int(fw) - fh = int(fh) - misc.assert_shape(f, [fh, fw][:f.ndim]) - assert fw >= 1 and fh >= 1 - return fw, fh - -#---------------------------------------------------------------------------- - -def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None): - r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f: Torch tensor, numpy array, or python list of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), - `[]` (impulse), or - `None` (identity). - device: Result device (default: cpu). - normalize: Normalize the filter so that it retains the magnitude - for constant input signal (DC)? (default: True). - flip_filter: Flip the filter? (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - Float32 tensor of the shape - `[filter_height, filter_width]` (non-separable) or - `[filter_taps]` (separable). - """ - # Validate. - if f is None: - f = 1 - f = torch.as_tensor(f, dtype=torch.float32) - assert f.ndim in [0, 1, 2] - assert f.numel() > 0 - if f.ndim == 0: - f = f[np.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.numel() >= 8) - if f.ndim == 1 and not separable: - f = f.ger(f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= f.sum() - if flip_filter: - f = f.flip(list(range(f.ndim))) - f = f * (gain ** (f.ndim / 2)) - f = f.to(device=device) - return f - -#---------------------------------------------------------------------------- - -def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Pad, upsample, filter, and downsample a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 2. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 4. 
Downsample the image by keeping every Nth pixel (`down`). - - This sequence of operations bears close resemblance to scipy.signal.upfirdn(). - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f) - return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1): - """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - assert f.dtype == torch.float32 and not f.requires_grad - batch_size, num_channels, in_height, in_width = x.shape - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Check that upsampled buffer is not smaller than the filter. - upW = in_width * upx + padx0 + padx1 - upH = in_height * upy + pady0 + pady1 - assert upW >= f.shape[-1] and upH >= f.shape[0] - - # Upsample by inserting zeros. - x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1]) - x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1]) - x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx]) - - # Pad or crop. - x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)]) - x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)] - - # Setup filter. - f = f * (gain ** (f.ndim / 2)) - f = f.to(x.dtype) - if not flip_filter: - f = f.flip(list(range(f.ndim))) - - # Convolve with the filter. - f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim) - if f.ndim == 4: - x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels) - else: - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels) - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels) - - # Downsample by throwing away pixels. 
- x = x[:, :, ::downy, ::downx] - return x - -#---------------------------------------------------------------------------- - -_upfirdn2d_cuda_cache = dict() - -def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1): - """Fast CUDA implementation of `upfirdn2d()` using custom ops. - """ - # Parse arguments. - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Lookup from cache. - key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - if key in _upfirdn2d_cuda_cache: - return _upfirdn2d_cuda_cache[key] - - # Forward op. - class Upfirdn2dCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, f): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - if f.ndim == 1 and f.shape[0] == 1: - f = f.square().unsqueeze(0) # Convert separable-1 into full-1x1. - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - y = x - if f.ndim == 2: - y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - else: - y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, 1.0) - y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, gain) - ctx.save_for_backward(f) - ctx.x_shape = x.shape - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - f, = ctx.saved_tensors - _, _, ih, iw = ctx.x_shape - _, _, oh, ow = dy.shape - fw, fh = _get_filter_size(f) - p = [ - fw - padx0 - 1, - iw * upx - ow * downx + padx0 - upx + 1, - fh - pady0 - 1, - ih * upy - oh * downy + pady0 - upy + 1, - ] - dx = None - df = None - - if ctx.needs_input_grad[0]: - dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f) - - assert not ctx.needs_input_grad[1] - return dx, df - - # Add to cache. - _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda - return Upfirdn2dCuda - -#---------------------------------------------------------------------------- - -def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Filter a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape matches the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + fw // 2, - padx1 + (fw - 1) // 2, - pady0 + fh // 2, - pady1 + (fh - 1) // 2, - ] - return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Upsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a multiple of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - upx, upy = _parse_scaling(up) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw + upx - 1) // 2, - padx1 + (fw - upx) // 2, - pady0 + (fh + upy - 1) // 2, - pady1 + (fh - upy) // 2, - ] - return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl) - -#---------------------------------------------------------------------------- - -def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Downsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a fraction of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the input. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw - downx + 1) // 2, - padx1 + (fw - downx) // 2, - pady0 + (fh - downy + 1) // 2, - pady1 + (fh - downy) // 2, - ] - return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/constants.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/constants.py deleted file mode 100644 index baaebbae71058fbb4faed35fd00e7559305dc409..0000000000000000000000000000000000000000 --- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/constants.py +++ /dev/null @@ -1,6 +0,0 @@ -import enum - - -class UploadTarget(enum.Enum): - PERSONAL_PROFILE = 'Personal Profile' - LORA_LIBRARY = 'LoRA Library' diff --git a/spaces/Edisonymy/buy-or-rent/src/plot.py b/spaces/Edisonymy/buy-or-rent/src/plot.py deleted file mode 100644 index 8afb92fd057a6467e9d4430554e8f74ebf7aefdd..0000000000000000000000000000000000000000 --- a/spaces/Edisonymy/buy-or-rent/src/plot.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st -import numpy as np -import matplotlib -import matplotlib.pyplot as plt -import matplotlib.ticker as mticker -import seaborn as sns -matplotlib.rcParams["axes.formatter.limits"] = (-99, 99) - - -def plot_hist_from_list(arrays, st, figsize=(7, 2), main_colors = ['green'], secondary_color = 'red', legends = None, title = 'Net Present Value Probability Distribution', xlabel = 'Net Present Value For Property Purchase', plot_below_zero = False, clip = None): - fig, ax = plt.subplots(figsize=figsize) - x_lower = [] - x_higher = [] - for num, array in enumerate(arrays): - # Plot the entire KDE plot in one color - sns.histplot(data=array, ax=ax, color=main_colors[num-1],alpha=0.3,linewidth=0.2) - # Plot the shaded area to the left of 0 in a different color - x_mean = np.mean(array) - x_std = np.std(array) - x_lower.append(x_mean-3*x_std) - x_higher.append(x_mean+3*x_std) - - # Set the axis limits based on the 95th percentile - ax.xaxis.set_major_formatter(mticker.FuncFormatter(format_with_commas)) - ax.set_xlim(np.min(x_lower), np.max(x_higher)) - ax.set_xlabel(xlabel) - ax.set_ylabel("Frequency") - ax.set_xticklabels(ax.get_xticklabels(), rotation=45) - ax.set_title(title) - if legends: - plt.legend(labels=legends) - st.pyplot(fig) - -### UNUSED ### -def format_with_commas(x, pos): - return '{:,.0f}'.format(x) - - -def graph_kde_plots(results_df, FEATURES, num_cols = 2): - - # Calculate the number of rows and columns needed for subplots - num_features = len(FEATURES) - num_rows = (num_features + num_cols - 1) // num_cols - # Create a figure and axis for subplots - fig, axes = plt.subplots(num_rows, num_cols, figsize=(15, 5 * num_rows)) - - # Flatten the axes if necessary (in case there's only one row) - if num_rows == 1: - axes = axes.reshape(1, -1) - # Loop through each feature and plot it - for i, feature in enumerate(FEATURES): - row = i // num_cols - col = i % num_cols - ax = axes[row, col] - - sns.kdeplot(data=results_df, x=feature, y="buying_npv", bw_adjust = 1.5, ax=ax, fill=True) - ax.set_title(f"{feature} vs. 
buying_npv") - ax.set_ylabel("buying_npv") - ax.set_xlabel(feature) - # Calculate the 95th percentile for x and y axes - x_low_percentile = np.percentile(results_df[feature], 0.1) - y_low_percentile = np.percentile(results_df['buying_npv'], 0.1) - x_high_percentile = np.percentile(results_df[feature], 99.9) - y_high_percentile = np.percentile(results_df['buying_npv'], 99.9) - - # Set the axis limits based on the 95th percentile - ax.set_xlim(x_low_percentile, x_high_percentile) - ax.set_ylim(y_low_percentile, y_high_percentile) - ax.yaxis.set_major_formatter(mticker.FuncFormatter(format_with_commas)) - - # ax.set_xticklabels(ax.get_xticklabels(), rotation=45) # Adjust the rotation angle as needed - - # Remove any empty subplots - for i in range(len(FEATURES), num_rows * num_cols): - fig.delaxes(axes.flatten()[i]) - - # Adjust spacing between subplots - plt.tight_layout() - - # Show the plots - plt.show() - st.pyplot(fig) \ No newline at end of file diff --git a/spaces/Epitech/MLOps/README.md b/spaces/Epitech/MLOps/README.md deleted file mode 100644 index 2b82f0b9137468f28cab38b74bcd665a6f715b16..0000000000000000000000000000000000000000 --- a/spaces/Epitech/MLOps/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MLOps -emoji: 👀 -colorFrom: green -colorTo: blue -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/EricaCorral/Chinese-To-English-Tools/README.md b/spaces/EricaCorral/Chinese-To-English-Tools/README.md deleted file mode 100644 index 0da3e0c634f940a7df5fcec2abda272144442554..0000000000000000000000000000000000000000 --- a/spaces/EricaCorral/Chinese-To-English-Tools/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chinese To English Tools -emoji: 🌖 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules.py deleted file mode 100644 index 2201a58bee9b7808d386b3ef9ac2d1f9630e56ef..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules.py +++ /dev/null @@ -1,521 +0,0 @@ -import copy -import math - -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, weight_norm - -from infer.lib.infer_pack import commons -from infer.lib.infer_pack.commons import get_padding, init_weights -from infer.lib.infer_pack.transforms import piecewise_rational_quadratic_transform - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = 
kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - 
self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - 
remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/abinet_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/abinet_pipeline.py deleted file mode 100644 index 3a54dfe6a8c310ab74f9a01b4671d7288436d0a7..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/abinet_pipeline.py +++ /dev/null @@ -1,96 +0,0 @@ -img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=32, - min_width=128, - max_width=128, - keep_aspect_ratio=False, - width_downsample_ratio=0.25), - dict( - type='RandomWrapper', - p=0.5, - transforms=[ - dict( - type='OneOfWrapper', - transforms=[ - dict( - type='RandomRotateTextDet', - max_angle=15, - ), - dict( - type='TorchVisionWrapper', - op='RandomAffine', - degrees=15, - translate=(0.3, 0.3), - scale=(0.5, 2.), - shear=(-45, 45), - ), - dict( - type='TorchVisionWrapper', - op='RandomPerspective', - distortion_scale=0.5, - p=1, - ), - ]) - ], - ), - dict( - type='RandomWrapper', - p=0.25, - transforms=[ - dict(type='PyramidRescale'), - dict( - type='Albu', - transforms=[ - dict(type='GaussNoise', var_limit=(20, 20), p=0.5), - dict(type='MotionBlur', blur_limit=6, p=0.5), - ]), - ]), - dict( - type='RandomWrapper', - p=0.25, - transforms=[ - dict( - type='TorchVisionWrapper', - op='ColorJitter', - brightness=0.5, - saturation=0.5, - contrast=0.5, - hue=0.1), - ]), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'img_shape', 'text', 'valid_ratio', - 'resize_shape' - ]), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiRotateAugOCR', - rotate_degrees=[0, 90, 270], - transforms=[ - dict( - type='ResizeOCR', - height=32, - min_width=128, - max_width=128, - keep_aspect_ratio=False, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'img_shape', 'valid_ratio', - 'resize_shape', 'img_norm_cfg', 'ori_filename' - ]), - ]) -] diff --git a/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/app.py b/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/Fernando22/freegpt-webui/client/css/field.css b/spaces/Fernando22/freegpt-webui/client/css/field.css deleted file mode 100644 index 914425a75d9e62e6428bdb8f5de2c66c91f10d33..0000000000000000000000000000000000000000 --- 
a/spaces/Fernando22/freegpt-webui/client/css/field.css +++ /dev/null @@ -1,11 +0,0 @@ -.field { - display: flex; - align-items: center; - padding: 4px; -} - -@media screen and (max-width: 990px) { - .field { - flex-wrap: nowrap; - } -} diff --git a/spaces/GIZ/SDSN-demo/ver0.1 scripts/sdg_analysis.py b/spaces/GIZ/SDSN-demo/ver0.1 scripts/sdg_analysis.py deleted file mode 100644 index e2f7ba1b63eaac66eac0d590b87e094a99519055..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/ver0.1 scripts/sdg_analysis.py +++ /dev/null @@ -1,160 +0,0 @@ -# set path -import glob, os, sys; -sys.path.append('../udfPreprocess') - -#import helper - - -#import needed libraries -import seaborn as sns -import matplotlib.pyplot as plt -import numpy as np -import streamlit as st -import docx -from docx.shared import Inches -from docx.shared import Pt -from docx.enum.style import WD_STYLE_TYPE -from udfPreprocess.sdg_classifier import sdg_classification -from udfPreprocess.sdg_classifier import runSDGPreprocessingPipeline -import configparser -import tempfile -import sqlite3 -import logging -logger = logging.getLogger(__name__) - - - -def app(): - - with st.container(): - st.markdown("

    SDSN x GIZ Policy Action Tracking v0.1

    ", unsafe_allow_html=True) - st.write(' ') - st.write(' ') - - with st.expander("ℹ️ - About this app", expanded=False): - - st.write( - """ - The *Analyse Policy Document* app is an easy-to-use interface built in Streamlit for analyzing policy documents with respect to SDG Classification for the paragraphs/texts in the document - developed by GIZ Data and the Sustainable Development Solution Network. \n - """) - st.markdown("") - - - with st.container(): - - - - if 'filepath' in st.session_state: - paraList = runSDGPreprocessingPipeline() - with st.spinner("Running SDG"): - - df, x = sdg_classification(paraList) - - - # classifier = load_sdgClassifier() - - # labels = classifier(par_list) - # labels_= [(l['label'],l['score']) for l in labels] - # df2 = DataFrame(labels_, columns=["SDG", "Relevancy"]) - # df2['text'] = par_list - # df2 = df2.sort_values(by="Relevancy", ascending=False).reset_index(drop=True) - # df2.index += 1 - # df2 =df2[df2['Relevancy']>.85] - # x = df2['SDG'].value_counts() - # df3 = df2.copy() - - plt.rcParams['font.size'] = 25 - colors = plt.get_cmap('Blues')(np.linspace(0.2, 0.7, len(x))) - # plot - fig, ax = plt.subplots() - ax.pie(x, colors=colors, radius=2, center=(4, 4), - wedgeprops={"linewidth": 1, "edgecolor": "white"}, frame=False,labels =list(x.index)) - # fig.savefig('temp.png', bbox_inches='tight',dpi= 100) - st.markdown("#### Anything related to SDGs? ####") - - # st.markdown("#### 🎈 Anything related to SDGs? ####") - - c4, c5, c6 = st.columns([2, 2, 2]) - - # Add styling - cmGreen = sns.light_palette("green", as_cmap=True) - cmRed = sns.light_palette("red", as_cmap=True) - # df2 = df2.style.background_gradient( - # cmap=cmGreen, - # subset=[ - # "Relevancy", - # ], - # ) - - # format_dictionary = { - # "Relevancy": "{:.1%}", - # } - - # df2 = df2.format(format_dictionary) - - with c5: - st.pyplot(fig) - - c7, c8, c9 = st.columns([1, 10, 1]) - with c8: - st.table(df) - - -# 1. Keyword heatmap \n - # 2. 
SDG Classification for the paragraphs/texts in the document - # - - # with st.container(): - # if 'docs' in st.session_state: - # docs = st.session_state['docs'] - # docs_processed, df, all_text, par_list = clean.preprocessingForSDG(docs) - # # paraList = st.session_state['paraList'] - # logging.info("keybert") - # with st.spinner("Running Key bert"): - - # kw_model = load_keyBert() - - # keywords = kw_model.extract_keywords( - # all_text, - # keyphrase_ngram_range=(1, 3), - # use_mmr=True, - # stop_words="english", - # top_n=10, - # diversity=0.7, - # ) - - # st.markdown("## 🎈 What is my document about?") - - # df = ( - # DataFrame(keywords, columns=["Keyword/Keyphrase", "Relevancy"]) - # .sort_values(by="Relevancy", ascending=False) - # .reset_index(drop=True) - # ) - # df1 = ( - # DataFrame(keywords, columns=["Keyword/Keyphrase", "Relevancy"]) - # .sort_values(by="Relevancy", ascending=False) - # .reset_index(drop=True) - # ) - # df.index += 1 - - # # Add styling - # cmGreen = sns.light_palette("green", as_cmap=True) - # cmRed = sns.light_palette("red", as_cmap=True) - # df = df.style.background_gradient( - # cmap=cmGreen, - # subset=[ - # "Relevancy", - # ], - # ) - - # c1, c2, c3 = st.columns([1, 3, 1]) - - # format_dictionary = { - # "Relevancy": "{:.1%}", - # } - - # df = df.format(format_dictionary) - - # with c2: - # - # st.table(df) \ No newline at end of file diff --git a/spaces/GMFTBY/PandaGPT/model/ImageBind/CONTRIBUTING.md b/spaces/GMFTBY/PandaGPT/model/ImageBind/CONTRIBUTING.md deleted file mode 100644 index 63d0b751e8a00b606ddff92e2524faa3c90a63b0..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/model/ImageBind/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to ImageBind -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to Omnivore, you agree that your contributions will be licensed -under the [LICENSE](LICENSE) file in the root directory of this source tree. 
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/user_dict.py b/spaces/GaenKoki/voicevox/voicevox_engine/user_dict.py deleted file mode 100644 index 819059bc529f8df52411ad94892b12eacc3b270c..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/user_dict.py +++ /dev/null @@ -1,298 +0,0 @@ -import json -import sys -import threading -import traceback -from pathlib import Path -from typing import Dict, List, Optional -from uuid import UUID, uuid4 - -import numpy as np -import pyopenjtalk -from fastapi import HTTPException -from pydantic import conint - -from .model import UserDictWord, WordTypes -from .part_of_speech_data import MAX_PRIORITY, MIN_PRIORITY, part_of_speech_data -from .utility import engine_root, get_save_dir, mutex_wrapper - -root_dir = engine_root() -save_dir = get_save_dir() - -if not save_dir.is_dir(): - save_dir.mkdir(parents=True) - -default_dict_path = root_dir / "default.csv" -user_dict_path = save_dir / "user_dict.json" -compiled_dict_path = save_dir / "user.dic" - - -mutex_user_dict = threading.Lock() -mutex_openjtalk_dict = threading.Lock() - - -@mutex_wrapper(mutex_user_dict) -def write_to_json(user_dict: Dict[str, UserDictWord], user_dict_path: Path): - converted_user_dict = {} - for word_uuid, word in user_dict.items(): - word_dict = word.dict() - word_dict["cost"] = priority2cost( - word_dict["context_id"], word_dict["priority"] - ) - del word_dict["priority"] - converted_user_dict[word_uuid] = word_dict - # 予めjsonに変換できることを確かめる - user_dict_json = json.dumps(converted_user_dict, ensure_ascii=False) - user_dict_path.write_text(user_dict_json, encoding="utf-8") - - -@mutex_wrapper(mutex_openjtalk_dict) -def update_dict( - default_dict_path: Path = default_dict_path, - user_dict_path: Path = user_dict_path, - compiled_dict_path: Path = compiled_dict_path, -): - random_string = uuid4() - tmp_csv_path = save_dir / f".tmp.dict_csv-{random_string}" - tmp_compiled_path = save_dir / f".tmp.dict_compiled-{random_string}" - - try: - # 辞書.csvを作成 - csv_text = "" - if not default_dict_path.is_file(): - print("Warning: Cannot find default dictionary.", file=sys.stderr) - return - default_dict = default_dict_path.read_text(encoding="utf-8") - if default_dict == default_dict.rstrip(): - default_dict += "\n" - csv_text += default_dict - user_dict = read_dict(user_dict_path=user_dict_path) - for word_uuid in user_dict: - word = user_dict[word_uuid] - csv_text += ( - "{surface},{context_id},{context_id},{cost},{part_of_speech}," - + "{part_of_speech_detail_1},{part_of_speech_detail_2}," - + "{part_of_speech_detail_3},{inflectional_type}," - + "{inflectional_form},{stem},{yomi},{pronunciation}," - + "{accent_type}/{mora_count},{accent_associative_rule}\n" - ).format( - surface=word.surface, - context_id=word.context_id, - cost=priority2cost(word.context_id, word.priority), - part_of_speech=word.part_of_speech, - part_of_speech_detail_1=word.part_of_speech_detail_1, - part_of_speech_detail_2=word.part_of_speech_detail_2, - part_of_speech_detail_3=word.part_of_speech_detail_3, - inflectional_type=word.inflectional_type, - inflectional_form=word.inflectional_form, - stem=word.stem, - yomi=word.yomi, - pronunciation=word.pronunciation, - accent_type=word.accent_type, - mora_count=word.mora_count, - accent_associative_rule=word.accent_associative_rule, - ) - tmp_csv_path.write_text(csv_text, encoding="utf-8") - - # 辞書.csvをOpenJTalk用にコンパイル - pyopenjtalk.create_user_dict(str(tmp_csv_path), str(tmp_compiled_path)) - if not tmp_compiled_path.is_file(): - 
raise RuntimeError("辞書のコンパイル時にエラーが発生しました。") - - # コンパイル済み辞書の置き換え・読み込み - pyopenjtalk.unset_user_dict() - tmp_compiled_path.replace(compiled_dict_path) - if compiled_dict_path.is_file(): - pyopenjtalk.set_user_dict(str(compiled_dict_path.resolve(strict=True))) - - except Exception as e: - print("Error: Failed to update dictionary.", file=sys.stderr) - traceback.print_exc(file=sys.stderr) - raise e - - finally: - # 後処理 - if tmp_csv_path.exists(): - tmp_csv_path.unlink() - if tmp_compiled_path.exists(): - tmp_compiled_path.unlink() - - -@mutex_wrapper(mutex_user_dict) -def read_dict(user_dict_path: Path = user_dict_path) -> Dict[str, UserDictWord]: - if not user_dict_path.is_file(): - return {} - with user_dict_path.open(encoding="utf-8") as f: - result = {} - for word_uuid, word in json.load(f).items(): - # cost2priorityで変換を行う際にcontext_idが必要となるが、 - # 0.12以前の辞書は、context_idがハードコーディングされていたためにユーザー辞書内に保管されていない - # ハードコーディングされていたcontext_idは固有名詞を意味するものなので、固有名詞のcontext_idを補完する - if word.get("context_id") is None: - word["context_id"] = part_of_speech_data[ - WordTypes.PROPER_NOUN - ].context_id - word["priority"] = cost2priority(word["context_id"], word["cost"]) - del word["cost"] - result[str(UUID(word_uuid))] = UserDictWord(**word) - - return result - - -def create_word( - surface: str, - pronunciation: str, - accent_type: int, - word_type: Optional[WordTypes] = None, - priority: Optional[int] = None, -) -> UserDictWord: - if word_type is None: - word_type = WordTypes.PROPER_NOUN - if word_type not in part_of_speech_data.keys(): - raise HTTPException(status_code=422, detail="不明な品詞です") - if priority is None: - priority = 5 - if not MIN_PRIORITY <= priority <= MAX_PRIORITY: - raise HTTPException(status_code=422, detail="優先度の値が無効です") - pos_detail = part_of_speech_data[word_type] - return UserDictWord( - surface=surface, - context_id=pos_detail.context_id, - priority=priority, - part_of_speech=pos_detail.part_of_speech, - part_of_speech_detail_1=pos_detail.part_of_speech_detail_1, - part_of_speech_detail_2=pos_detail.part_of_speech_detail_2, - part_of_speech_detail_3=pos_detail.part_of_speech_detail_3, - inflectional_type="*", - inflectional_form="*", - stem="*", - yomi=pronunciation, - pronunciation=pronunciation, - accent_type=accent_type, - accent_associative_rule="*", - ) - - -def apply_word( - surface: str, - pronunciation: str, - accent_type: int, - word_type: Optional[WordTypes] = None, - priority: Optional[int] = None, - user_dict_path: Path = user_dict_path, - compiled_dict_path: Path = compiled_dict_path, -) -> str: - word = create_word( - surface=surface, - pronunciation=pronunciation, - accent_type=accent_type, - word_type=word_type, - priority=priority, - ) - user_dict = read_dict(user_dict_path=user_dict_path) - word_uuid = str(uuid4()) - user_dict[word_uuid] = word - write_to_json(user_dict, user_dict_path) - update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path) - return word_uuid - - -def rewrite_word( - word_uuid: str, - surface: str, - pronunciation: str, - accent_type: int, - word_type: Optional[WordTypes] = None, - priority: Optional[int] = None, - user_dict_path: Path = user_dict_path, - compiled_dict_path: Path = compiled_dict_path, -): - word = create_word( - surface=surface, - pronunciation=pronunciation, - accent_type=accent_type, - word_type=word_type, - priority=priority, - ) - user_dict = read_dict(user_dict_path=user_dict_path) - if word_uuid not in user_dict: - raise HTTPException(status_code=422, detail="UUIDに該当するワードが見つかりませんでした") - 
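-    # overwrite the existing entry, persist the user dictionary to JSON, and recompile/reload it for OpenJTalk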
user_dict[word_uuid] = word - write_to_json(user_dict, user_dict_path) - update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path) - - -def delete_word( - word_uuid: str, - user_dict_path: Path = user_dict_path, - compiled_dict_path: Path = compiled_dict_path, -): - user_dict = read_dict(user_dict_path=user_dict_path) - if word_uuid not in user_dict: - raise HTTPException(status_code=422, detail="IDに該当するワードが見つかりませんでした") - del user_dict[word_uuid] - write_to_json(user_dict, user_dict_path) - update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path) - - -def import_user_dict( - dict_data: Dict[str, UserDictWord], - override: bool = False, - user_dict_path: Path = user_dict_path, - default_dict_path: Path = default_dict_path, - compiled_dict_path: Path = compiled_dict_path, -): - # 念のため型チェックを行う - for word_uuid, word in dict_data.items(): - UUID(word_uuid) - assert type(word) == UserDictWord - for pos_detail in part_of_speech_data.values(): - if word.context_id == pos_detail.context_id: - assert word.part_of_speech == pos_detail.part_of_speech - assert ( - word.part_of_speech_detail_1 == pos_detail.part_of_speech_detail_1 - ) - assert ( - word.part_of_speech_detail_2 == pos_detail.part_of_speech_detail_2 - ) - assert ( - word.part_of_speech_detail_3 == pos_detail.part_of_speech_detail_3 - ) - assert ( - word.accent_associative_rule in pos_detail.accent_associative_rules - ) - break - else: - raise ValueError("対応していない品詞です") - old_dict = read_dict(user_dict_path=user_dict_path) - if override: - new_dict = {**old_dict, **dict_data} - else: - new_dict = {**dict_data, **old_dict} - write_to_json(user_dict=new_dict, user_dict_path=user_dict_path) - update_dict( - default_dict_path=default_dict_path, - user_dict_path=user_dict_path, - compiled_dict_path=compiled_dict_path, - ) - - -def search_cost_candidates(context_id: int) -> List[int]: - for value in part_of_speech_data.values(): - if value.context_id == context_id: - return value.cost_candidates - raise HTTPException(status_code=422, detail="品詞IDが不正です") - - -def cost2priority(context_id: int, cost: conint(ge=-32768, le=32767)) -> int: - cost_candidates = search_cost_candidates(context_id) - # cost_candidatesの中にある値で最も近い値を元にpriorityを返す - # 参考: https://qiita.com/Krypf/items/2eada91c37161d17621d - # この関数とpriority2cost関数によって、辞書ファイルのcostを操作しても最も近いpriorityのcostに上書きされる - return MAX_PRIORITY - np.argmin(np.abs(np.array(cost_candidates) - cost)) - - -def priority2cost( - context_id: int, priority: conint(ge=MIN_PRIORITY, le=MAX_PRIORITY) -) -> int: - cost_candidates = search_cost_candidates(context_id) - return cost_candidates[MAX_PRIORITY - priority] diff --git a/spaces/Gasi/White-box-Cartoonization/wbc/guided_filter.py b/spaces/Gasi/White-box-Cartoonization/wbc/guided_filter.py deleted file mode 100644 index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000 --- a/spaces/Gasi/White-box-Cartoonization/wbc/guided_filter.py +++ /dev/null @@ -1,87 +0,0 @@ -import tensorflow as tf -import numpy as np - - - - -def tf_box_filter(x, r): - k_size = int(2*r+1) - ch = x.get_shape().as_list()[-1] - weight = 1/(k_size**2) - box_kernel = weight*np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - - -def guided_filter(x, y, r, eps=1e-2): - - x_shape = tf.shape(x) - #y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) 
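-    # N is the box filter of an all-ones image: 1 in the interior and smaller near the borders,
-    # used below to correct the zero-padding border effect of the box-filtered means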
- - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - - #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - #lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output - - -if __name__ == '__main__': - import cv2 - from tqdm import tqdm - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3]) - output = guided_filter(input_photo, input_photo, 5, eps=1) - image = cv2.imread('output_figure1/cartoon2.jpg') - image = image/127.5 - 1 - image = np.expand_dims(image, axis=0) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - sess.run(tf.global_variables_initializer()) - - out = sess.run(output, feed_dict={input_photo: image}) - out = (np.squeeze(out)+1)*127.5 - out = np.clip(out, 0, 255).astype(np.uint8) - cv2.imwrite('output_figure1/cartoon2_filter.jpg', out) diff --git a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/resample.py b/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/resample.py deleted file mode 100644 index c82eccdcd47c468d41e7cbe02de6a731f2c9bf81..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/resample.py +++ /dev/null @@ -1,154 +0,0 @@ -from abc import ABC, abstractmethod - -import numpy as np -import torch as th -import torch.distributed as dist - - -def create_named_schedule_sampler(name, diffusion): - """ - Create a ScheduleSampler from a library of pre-defined samplers. - - :param name: the name of the sampler. - :param diffusion: the diffusion object to sample for. - """ - if name == "uniform": - return UniformSampler(diffusion) - elif name == "loss-second-moment": - return LossSecondMomentResampler(diffusion) - else: - raise NotImplementedError(f"unknown schedule sampler: {name}") - - -class ScheduleSampler(ABC): - """ - A distribution over timesteps in the diffusion process, intended to reduce - variance of the objective. - - By default, samplers perform unbiased importance sampling, in which the - objective's mean is unchanged. - However, subclasses may override sample() to change how the resampled - terms are reweighted, allowing for actual changes in the objective. - """ - - @abstractmethod - def weights(self): - """ - Get a numpy array of weights, one per diffusion step. - - The weights needn't be normalized, but must be positive. - """ - - def sample(self, batch_size, device): - """ - Importance-sample timesteps for a batch. - - :param batch_size: the number of timesteps. 
- :param device: the torch device to save to. - :return: a tuple (timesteps, weights): - - timesteps: a tensor of timestep indices. - - weights: a tensor of weights to scale the resulting losses. - """ - w = self.weights() - p = w / np.sum(w) - indices_np = np.random.choice(len(p), size=(batch_size,), p=p) - indices = th.from_numpy(indices_np).long().to(device) - weights_np = 1 / (len(p) * p[indices_np]) - weights = th.from_numpy(weights_np).float().to(device) - return indices, weights - - -class UniformSampler(ScheduleSampler): - def __init__(self, diffusion): - self.diffusion = diffusion - self._weights = np.ones([diffusion.num_timesteps]) - - def weights(self): - return self._weights - - -class LossAwareSampler(ScheduleSampler): - def update_with_local_losses(self, local_ts, local_losses): - """ - Update the reweighting using losses from a model. - - Call this method from each rank with a batch of timesteps and the - corresponding losses for each of those timesteps. - This method will perform synchronization to make sure all of the ranks - maintain the exact same reweighting. - - :param local_ts: an integer Tensor of timesteps. - :param local_losses: a 1D Tensor of losses. - """ - batch_sizes = [ - th.tensor([0], dtype=th.int32, device=local_ts.device) - for _ in range(dist.get_world_size()) - ] - dist.all_gather( - batch_sizes, - th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device), - ) - - # Pad all_gather batches to be the maximum batch size. - batch_sizes = [x.item() for x in batch_sizes] - max_bs = max(batch_sizes) - - timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes] - loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes] - dist.all_gather(timestep_batches, local_ts) - dist.all_gather(loss_batches, local_losses) - timesteps = [ - x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs] - ] - losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]] - self.update_with_all_losses(timesteps, losses) - - @abstractmethod - def update_with_all_losses(self, ts, losses): - """ - Update the reweighting using losses from a model. - - Sub-classes should override this method to update the reweighting - using losses from the model. - - This method directly updates the reweighting without synchronizing - between workers. It is called by update_with_local_losses from all - ranks with identical arguments. Thus, it should have deterministic - behavior to maintain state across workers. - - :param ts: a list of int timesteps. - :param losses: a list of float losses, one per timestep. - """ - - -class LossSecondMomentResampler(LossAwareSampler): - def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001): - self.diffusion = diffusion - self.history_per_term = history_per_term - self.uniform_prob = uniform_prob - self._loss_history = np.zeros( - [diffusion.num_timesteps, history_per_term], dtype=np.float64 - ) - self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=np.int) - - def weights(self): - if not self._warmed_up(): - return np.ones([self.diffusion.num_timesteps], dtype=np.float64) - weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1)) - weights /= np.sum(weights) - weights *= 1 - self.uniform_prob - weights += self.uniform_prob / len(weights) - return weights - - def update_with_all_losses(self, ts, losses): - for t, loss in zip(ts, losses): - if self._loss_counts[t] == self.history_per_term: - # Shift out the oldest loss term. 
- self._loss_history[t, :-1] = self._loss_history[t, 1:] - self._loss_history[t, -1] = loss - else: - self._loss_history[t, self._loss_counts[t]] = loss - self._loss_counts[t] += 1 - - def _warmed_up(self): - return (self._loss_counts == self.history_per_term).all() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py deleted file mode 100644 index 20bffb95616d4358007d0825820f4a91ea223649..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(18, 36)), - stage3=dict(num_channels=(18, 36, 72)), - stage4=dict(num_channels=(18, 36, 72, 144)))), - neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py deleted file mode 100644 index 439c39a93a8a12119ffa408987c8cea6d8cb313a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/dist_train.sh b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/dist_train.sh deleted file mode 100644 index 5b43fffbf28fc9b8ba7c14efcd5e4f8b19279470..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/dist_train.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -PORT=${PORT:-29500} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \ - $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3} diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/data_loaders.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/data_loaders.py deleted file mode 100644 index bf18572329019d7a8f1df01799eda207c16dd7ff..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/diffusion/data_loaders.py +++ /dev/null @@ -1,284 +0,0 @@ -import os -import random -import re -import numpy as np -import librosa -import torch -import random -from utils import repeat_expand_2d -from tqdm import tqdm -from torch.utils.data import Dataset - -def traverse_dir( - root_dir, - extensions, - amount=None, - str_include=None, - str_exclude=None, - is_pure=False, - is_sort=False, - is_ext=True): - - file_list = [] - cnt = 0 - for root, _, files in os.walk(root_dir): - for file in files: - if any([file.endswith(f".{ext}") for ext in extensions]): - # path - mix_path = os.path.join(root, file) - pure_path = mix_path[len(root_dir)+1:] if is_pure else mix_path - - # amount - if 
(amount is not None) and (cnt == amount): - if is_sort: - file_list.sort() - return file_list - - # check string - if (str_include is not None) and (str_include not in pure_path): - continue - if (str_exclude is not None) and (str_exclude in pure_path): - continue - - if not is_ext: - ext = pure_path.split('.')[-1] - pure_path = pure_path[:-(len(ext)+1)] - file_list.append(pure_path) - cnt += 1 - if is_sort: - file_list.sort() - return file_list - - -def get_data_loaders(args, whole_audio=False): - data_train = AudioDataset( - filelists = args.data.training_files, - waveform_sec=args.data.duration, - hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=whole_audio, - extensions=args.data.extensions, - n_spk=args.model.n_spk, - spk=args.spk, - device=args.train.cache_device, - fp16=args.train.cache_fp16, - use_aug=True) - loader_train = torch.utils.data.DataLoader( - data_train , - batch_size=args.train.batch_size if not whole_audio else 1, - shuffle=True, - num_workers=args.train.num_workers if args.train.cache_device=='cpu' else 0, - persistent_workers=(args.train.num_workers > 0) if args.train.cache_device=='cpu' else False, - pin_memory=True if args.train.cache_device=='cpu' else False - ) - data_valid = AudioDataset( - filelists = args.data.validation_files, - waveform_sec=args.data.duration, - hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=True, - spk=args.spk, - extensions=args.data.extensions, - n_spk=args.model.n_spk) - loader_valid = torch.utils.data.DataLoader( - data_valid, - batch_size=1, - shuffle=False, - num_workers=0, - pin_memory=True - ) - return loader_train, loader_valid - - -class AudioDataset(Dataset): - def __init__( - self, - filelists, - waveform_sec, - hop_size, - sample_rate, - spk, - load_all_data=True, - whole_audio=False, - extensions=['wav'], - n_spk=1, - device='cpu', - fp16=False, - use_aug=False, - ): - super().__init__() - - self.waveform_sec = waveform_sec - self.sample_rate = sample_rate - self.hop_size = hop_size - self.filelists = filelists - self.whole_audio = whole_audio - self.use_aug = use_aug - self.data_buffer={} - self.pitch_aug_dict = {} - # np.load(os.path.join(self.path_root, 'pitch_aug_dict.npy'), allow_pickle=True).item() - if load_all_data: - print('Load all the data filelists:', filelists) - else: - print('Load the f0, volume data filelists:', filelists) - with open(filelists,"r") as f: - self.paths = f.read().splitlines() - for name_ext in tqdm(self.paths, total=len(self.paths)): - name = os.path.splitext(name_ext)[0] - path_audio = name_ext - duration = librosa.get_duration(filename = path_audio, sr = self.sample_rate) - - path_f0 = name_ext + ".f0.npy" - f0,_ = np.load(path_f0,allow_pickle=True) - f0 = torch.from_numpy(np.array(f0,dtype=float)).float().unsqueeze(-1).to(device) - - path_volume = name_ext + ".vol.npy" - volume = np.load(path_volume) - volume = torch.from_numpy(volume).float().unsqueeze(-1).to(device) - - path_augvol = name_ext + ".aug_vol.npy" - aug_vol = np.load(path_augvol) - aug_vol = torch.from_numpy(aug_vol).float().unsqueeze(-1).to(device) - - if n_spk is not None and n_spk > 1: - spk_name = name_ext.split("/")[-2] - spk_id = spk[spk_name] if spk_name in spk else 0 - if spk_id < 0 or spk_id >= n_spk: - raise ValueError(' [x] Muiti-speaker traing error : spk_id must be a positive integer from 0 to n_spk-1 ') - else: - spk_id = 0 - spk_id = 
torch.LongTensor(np.array([spk_id])).to(device) - - if load_all_data: - ''' - audio, sr = librosa.load(path_audio, sr=self.sample_rate) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - audio = torch.from_numpy(audio).to(device) - ''' - path_mel = name_ext + ".mel.npy" - mel = np.load(path_mel) - mel = torch.from_numpy(mel).to(device) - - path_augmel = name_ext + ".aug_mel.npy" - aug_mel,keyshift = np.load(path_augmel, allow_pickle=True) - aug_mel = np.array(aug_mel,dtype=float) - aug_mel = torch.from_numpy(aug_mel).to(device) - self.pitch_aug_dict[name_ext] = keyshift - - path_units = name_ext + ".soft.pt" - units = torch.load(path_units).to(device) - units = units[0] - units = repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - if fp16: - mel = mel.half() - aug_mel = aug_mel.half() - units = units.half() - - self.data_buffer[name_ext] = { - 'duration': duration, - 'mel': mel, - 'aug_mel': aug_mel, - 'units': units, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - else: - path_augmel = name_ext + ".aug_mel.npy" - aug_mel,keyshift = np.load(path_augmel, allow_pickle=True) - self.pitch_aug_dict[name_ext] = keyshift - self.data_buffer[name_ext] = { - 'duration': duration, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - - - def __getitem__(self, file_idx): - name_ext = self.paths[file_idx] - data_buffer = self.data_buffer[name_ext] - # check duration. if too short, then skip - if data_buffer['duration'] < (self.waveform_sec + 0.1): - return self.__getitem__( (file_idx + 1) % len(self.paths)) - - # get item - return self.get_data(name_ext, data_buffer) - - def get_data(self, name_ext, data_buffer): - name = os.path.splitext(name_ext)[0] - frame_resolution = self.hop_size / self.sample_rate - duration = data_buffer['duration'] - waveform_sec = duration if self.whole_audio else self.waveform_sec - - # load audio - idx_from = 0 if self.whole_audio else random.uniform(0, duration - waveform_sec - 0.1) - start_frame = int(idx_from / frame_resolution) - units_frame_len = int(waveform_sec / frame_resolution) - aug_flag = random.choice([True, False]) and self.use_aug - ''' - audio = data_buffer.get('audio') - if audio is None: - path_audio = os.path.join(self.path_root, 'audio', name) + '.wav' - audio, sr = librosa.load( - path_audio, - sr = self.sample_rate, - offset = start_frame * frame_resolution, - duration = waveform_sec) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - # clip audio into N seconds - audio = audio[ : audio.shape[-1] // self.hop_size * self.hop_size] - audio = torch.from_numpy(audio).float() - else: - audio = audio[start_frame * self.hop_size : (start_frame + units_frame_len) * self.hop_size] - ''' - # load mel - mel_key = 'aug_mel' if aug_flag else 'mel' - mel = data_buffer.get(mel_key) - if mel is None: - mel = name_ext + ".mel.npy" - mel = np.load(mel) - mel = mel[start_frame : start_frame + units_frame_len] - mel = torch.from_numpy(mel).float() - else: - mel = mel[start_frame : start_frame + units_frame_len] - - # load f0 - f0 = data_buffer.get('f0') - aug_shift = 0 - if aug_flag: - aug_shift = self.pitch_aug_dict[name_ext] - f0_frames = 2 ** (aug_shift / 12) * f0[start_frame : start_frame + units_frame_len] - - # load units - units = data_buffer.get('units') - if units is None: - path_units = name_ext + ".soft.pt" - units = torch.load(path_units) - units = units[0] - units = repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - units = units[start_frame : start_frame + units_frame_len] - - 
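-        # note: 'units' were repeat-expanded to f0.size(0) above, so this window keeps units, f0 and mel frame-aligned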
# load volume - vol_key = 'aug_vol' if aug_flag else 'volume' - volume = data_buffer.get(vol_key) - volume_frames = volume[start_frame : start_frame + units_frame_len] - - # load spk_id - spk_id = data_buffer.get('spk_id') - - # load shift - aug_shift = torch.from_numpy(np.array([[aug_shift]])).float() - - return dict(mel=mel, f0=f0_frames, volume=volume_frames, units=units, spk_id=spk_id, aug_shift=aug_shift, name=name, name_ext=name_ext) - - def __len__(self): - return len(self.paths) \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/mmap_dataloader/mmap_datamodule.py b/spaces/HaloMaster/chinesesummary/fengshen/data/mmap_dataloader/mmap_datamodule.py deleted file mode 100644 index 534cfb179b649a317253685848e88aebeaea7e0f..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/mmap_dataloader/mmap_datamodule.py +++ /dev/null @@ -1,68 +0,0 @@ -from typing import Optional -from pytorch_lightning import LightningDataModule -from torch.utils.data import DataLoader -from fengshen.data.mmap_index_dataset import MMapIndexDataset - - -class MMapDataModule(LightningDataModule): - @ staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('MMAP DataModule') - parser.add_argument('--num_workers', default=8, type=int) - parser.add_argument('--train_batchsize', default=32, type=int) - parser.add_argument('--eval_batchsize', default=32, type=int) - parser.add_argument('--test_batchsize', default=32, type=int) - parser.add_argument('--train_datas', default=[ - './train_datas' - ], type=str, nargs='+') - parser.add_argument('--valid_datas', default=[ - './valid_datas' - ], type=str, nargs='+') - parser.add_argument('--test_datas', default=[ - './test_datas'], - type=str, nargs='+') - parser.add_argument('--input_tensor_name', default=['input_ids'], type=str, nargs='+') - return parent_args - - def __init__( - self, - collate_fn, - args, - **kwargs, - ): - super().__init__() - self.collate_fn = collate_fn - self.train_dataset = MMapIndexDataset(args.train_datas, args.input_tensor_name) - self.valid_dataset = MMapIndexDataset(args.valid_datas, args.input_tensor_name) - self.test_dataset = MMapIndexDataset(args.test_datas, args.input_tensor_name) - self.save_hyperparameters(args) - - def setup(self, stage: Optional[str] = None) -> None: - return super().setup(stage) - - def train_dataloader(self): - return DataLoader( - self.train_dataset, - batch_size=self.hparams.train_batchsize, - shuffle=True, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) - - def val_dataloader(self): - return DataLoader( - self.valid_dataset, - batch_size=self.hparams.eval_batchsize, - shuffle=True, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) - - def test_dataloader(self): - return DataLoader( - self.test_dataset, - batch_size=self.hparams.test_batchsize, - shuffle=True, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) diff --git a/spaces/HaoFeng2019/DocGeoNet/position_encoding.py b/spaces/HaoFeng2019/DocGeoNet/position_encoding.py deleted file mode 100644 index 66eabfe7e44794d9a7725061ba767e3942b5b975..0000000000000000000000000000000000000000 --- a/spaces/HaoFeng2019/DocGeoNet/position_encoding.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Various positional encodings for the transformer. 
-""" -import math -import torch -from torch import nn -from typing import List -from typing import Optional -from torch import Tensor - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, mask): - assert mask is not None - y_embed = mask.cumsum(1, dtype=torch.float32) - x_embed = mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32)#.cuda() - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - # print(pos.shape) - return pos - - -class PositionEmbeddingLearned(nn.Module): - """ - Absolute pos embedding, learned. 
- """ - def __init__(self, num_pos_feats=256): - super().__init__() - self.row_embed = nn.Embedding(50, num_pos_feats) - self.col_embed = nn.Embedding(50, num_pos_feats) - self.reset_parameters() - - def reset_parameters(self): - nn.init.uniform_(self.row_embed.weight) - nn.init.uniform_(self.col_embed.weight) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - h, w = x.shape[-2:] - i = torch.arange(w, device=x.device) - j = torch.arange(h, device=x.device) - x_emb = self.col_embed(i) - y_emb = self.row_embed(j) - pos = torch.cat([ - x_emb.unsqueeze(0).repeat(h, 1, 1), - y_emb.unsqueeze(1).repeat(1, w, 1), - ], dim=-1).permute(2, 0, 1).unsqueeze(0).repeat(x.shape[0], 1, 1, 1) - return pos - -def build_position_encoding(hidden_dim=512, position_embedding='sine'): - N_steps = hidden_dim // 2 - if position_embedding in ('v2', 'sine'): - position_embedding = PositionEmbeddingSine(N_steps, normalize=True) - elif position_embedding in ('v3', 'learned'): - position_embedding = PositionEmbeddingLearned(N_steps) - else: - raise ValueError(f"not supported {position_embedding}") - - return position_embedding - diff --git a/spaces/HarlanHong/DaGAN/modules/dense_motion.py b/spaces/HarlanHong/DaGAN/modules/dense_motion.py deleted file mode 100644 index 0a427423accbf8217882817a09e34c2ab42239d3..0000000000000000000000000000000000000000 --- a/spaces/HarlanHong/DaGAN/modules/dense_motion.py +++ /dev/null @@ -1,112 +0,0 @@ -from torch import nn -import torch.nn.functional as F -import torch -from modules.util import Hourglass, AntiAliasInterpolation2d, make_coordinate_grid, kp2gaussian -import pdb -from modules.util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d - - -class DenseMotionNetwork(nn.Module): - """ - Module that predicting a dense motion from sparse motion representation given by kp_source and kp_driving - """ - - def __init__(self, block_expansion, num_blocks, max_features, num_kp, num_channels, estimate_occlusion_map=False, - scale_factor=1, kp_variance=0.01): - super(DenseMotionNetwork, self).__init__() - self.hourglass = Hourglass(block_expansion=block_expansion, in_features=(num_kp + 1) * (num_channels + 1), - max_features=max_features, num_blocks=num_blocks) - - self.mask = nn.Conv2d(self.hourglass.out_filters, num_kp + 1, kernel_size=(7, 7), padding=(3, 3)) - - if estimate_occlusion_map: - self.occlusion = nn.Conv2d(self.hourglass.out_filters, 1, kernel_size=(7, 7), padding=(3, 3)) - else: - self.occlusion = None - - self.num_kp = num_kp - self.scale_factor = scale_factor - self.kp_variance = kp_variance - - if self.scale_factor != 1: - self.down = AntiAliasInterpolation2d(num_channels, self.scale_factor) - - def create_heatmap_representations(self, source_image, kp_driving, kp_source): - """ - Eq 6. in the paper H_k(z) - """ - spatial_size = source_image.shape[2:] - gaussian_driving = kp2gaussian(kp_driving, spatial_size=spatial_size, kp_variance=self.kp_variance) - gaussian_source = kp2gaussian(kp_source, spatial_size=spatial_size, kp_variance=self.kp_variance) - heatmap = gaussian_driving - gaussian_source - #adding background feature - zeros = torch.zeros(heatmap.shape[0], 1, spatial_size[0], spatial_size[1]).type(heatmap.type()) - heatmap = torch.cat([zeros, heatmap], dim=1) - heatmap = heatmap.unsqueeze(2) - return heatmap - - def create_sparse_motions(self, source_image, kp_driving, kp_source): - """ - Eq 4. 
in the paper T_{s<-d}(z) - """ - bs, _, h, w = source_image.shape - identity_grid = make_coordinate_grid((h, w), type=kp_source['value'].type()) - identity_grid = identity_grid.view(1, 1, h, w, 2) - coordinate_grid = identity_grid - kp_driving['value'].view(bs, self.num_kp, 1, 1, 2) - if 'jacobian' in kp_driving: - jacobian = torch.matmul(kp_source['jacobian'], torch.inverse(kp_driving['jacobian'])) - jacobian = jacobian.unsqueeze(-3).unsqueeze(-3) - jacobian = jacobian.repeat(1, 1, h, w, 1, 1) - coordinate_grid = torch.matmul(jacobian, coordinate_grid.unsqueeze(-1)) - coordinate_grid = coordinate_grid.squeeze(-1) - - driving_to_source = coordinate_grid + kp_source['value'].view(bs, self.num_kp, 1, 1, 2) - - #adding background feature - identity_grid = identity_grid.repeat(bs, 1, 1, 1, 1) - sparse_motions = torch.cat([identity_grid, driving_to_source], dim=1) #bs, num_kp+1,w,h,2 - return sparse_motions - - def create_deformed_source_image(self, source_image, sparse_motions): - """ - Eq 7. in the paper \hat{T}_{s<-d}(z) - """ - bs, _, h, w = source_image.shape - source_repeat = source_image.unsqueeze(1).unsqueeze(1).repeat(1, self.num_kp + 1, 1, 1, 1, 1) - source_repeat = source_repeat.view(bs * (self.num_kp + 1), -1, h, w) - sparse_motions = sparse_motions.view((bs * (self.num_kp + 1), h, w, -1)) - sparse_deformed = F.grid_sample(source_repeat, sparse_motions) - sparse_deformed = sparse_deformed.view((bs, self.num_kp + 1, -1, h, w)) - return sparse_deformed - - def forward(self, source_image, kp_driving, kp_source): - if self.scale_factor != 1: - source_image = self.down(source_image) - bs, _, h, w = source_image.shape - out_dict = dict() - heatmap_representation = self.create_heatmap_representations(source_image, kp_driving, kp_source) - sparse_motion = self.create_sparse_motions(source_image, kp_driving, kp_source) - deformed_source = self.create_deformed_source_image(source_image, sparse_motion) - out_dict['sparse_deformed'] = deformed_source - - input = torch.cat([heatmap_representation, deformed_source], dim=2) - input = input.view(bs, -1, h, w) - - prediction = self.hourglass(input) - - mask = self.mask(prediction) - mask = F.softmax(mask, dim=1) - out_dict['mask'] = mask - mask = mask.unsqueeze(2) - sparse_motion = sparse_motion.permute(0, 1, 4, 2, 3) - deformation = (sparse_motion * mask).sum(dim=1) - deformation = deformation.permute(0, 2, 3, 1) - - out_dict['deformation'] = deformation - - # Sec. 3.2 in the paper - if self.occlusion: - occlusion_map = torch.sigmoid(self.occlusion(prediction)) - out_dict['occlusion_map'] = occlusion_map - - return out_dict diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fast_noisy_channel/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fast_noisy_channel/README.md deleted file mode 100644 index f2631a8c34d11bdf7d351c6807b6fe415f5715e1..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/fast_noisy_channel/README.md +++ /dev/null @@ -1,345 +0,0 @@ -# Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling - -## Introduction -- [Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) introduce a simple and effective noisy channel modeling approach for neural machine translation. However, the noisy channel online decoding approach introduced in this paper is too slow to be practical. -- To address this, [Bhosale et al. 
(2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduce 3 simple approximations to make this approach very fast and practical without much loss in accuracy. -- This README provides instructions on how to run online decoding or generation with the noisy channel modeling approach, including ways to make it very fast without much loss in accuracy. - -## Noisy Channel Modeling - -[Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) apply the Bayes Rule to predict `P(y|x)`, the probability of the target `y` given the source `x`. -```P(y|x) = P(x|y) * P(y) / P(x)``` -- `P(x|y)` predicts the source `x` given the target `y` and is referred to as the **channel model** -- `P(y)` is a **language model** over the target `y` -- `P(x)` is generally not modeled since it is constant for all `y`. - -We use Transformer models to parameterize the direct model `P(y|x)`, the channel model `P(x|y)` and the language model `P(y)`. - -During online decoding with beam search, we generate the top `K2` candidates per beam and score them with the following linear combination of the channel model, the language model and the direct model scores. - -```(1 / t) * log(P(y|x)) + (1 / s) * (λ1 * log(P(x|y)) + λ2 * log(P(y)))``` -- `t` - Target Prefix Length -- `s` - Source Length -- `λ1` - Channel Model Weight -- `λ2` - Language Model Weight - -The top `beam_size` candidates based on the above combined scores are chosen to continue the beams in beam search. In beam search with a direct model alone, the scores from the direct model `P(y|x)` are used to choose the top candidates in beam search. - -This framework provides a great way to utilize strong target language models trained on large amounts of unlabeled data. Language models can prefer targets unrelated to the source, so we also need a channel model whose role is to ensure that the target preferred by the language model also translates back to the source. - -### Training Translation Models and Language Models - -For training Transformer models in fairseq for machine translation, refer to instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/translation) - -For training Transformer models in fairseq for language modeling, refer to instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model) - -### Generation with Language Model for German-English translation with fairseq - -Here are instructions to generate using a direct model and a target-side language model.
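-For intuition, the combined score defined in the Noisy Channel Modeling section above can be read as the short sketch below. This is an illustration only and not the fairseq implementation: the per-candidate log-probabilities from the direct, channel and language models, the candidate set and the weights `λ1`/`λ2` are all assumed to be given.
-
-```python
-# Illustrative sketch of the combined online noisy channel score:
-#   (1 / t) * log P(y|x) + (1 / s) * (lambda1 * log P(x|y) + lambda2 * log P(y))
-def combined_score(lprob_direct, lprob_channel, lprob_lm,
-                   tgt_prefix_len, src_len, ch_wt, lm_wt):
-    return (lprob_direct / tgt_prefix_len
-            + (ch_wt * lprob_channel + lm_wt * lprob_lm) / src_len)
-
-# Toy usage: pick the best of the K2 expanded candidates for one beam,
-# given (log P(y|x), log P(x|y), log P(y)) for each candidate continuation.
-candidates = [(-2.1, -3.0, -4.2), (-2.3, -2.5, -3.9)]
-best = max(candidates,
-           key=lambda c: combined_score(*c, tgt_prefix_len=5, src_len=7,
-                                        ch_wt=0.3, lm_wt=0.5))
-```
-
-Roughly, the `1 / t` and `1 / s` factors length-normalize the target-side and source-side terms so that the two weights stay comparable across candidates and sentences of different lengths.
-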
- -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt - -k2=10 -lenpen=0.16 -lm_wt=0.14 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --k2 ${k2} \ - --combine-method lm_only \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --gen-subset valid \ - --remove-bpe \ - --fp16 \ - --batch-size 10 -``` -### Noisy Channel Generation for German-English translation with fairseq - -Here are instructions for noisy channel generation with a direct model, channel model and language model as explained in section [Noisy Channel Modeling](#noisy-channel-modeling). - -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -ch_model=en_de.big.seed4.pt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt -O ${ch_model} - -k2=10 -lenpen=0.21 -lm_wt=0.50 -bw_wt=0.30 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --channel-model ${ch_model} \ - --k2 ${k2} \ - --combine-method noisy_channel \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --ch-wt ${bw_wt} \ - --gen-subset test \ - --remove-bpe \ - --fp16 \ - --batch-size 1 -``` -## Fast Noisy Channel Modeling - -[Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduces 3 approximations that speed up online noisy channel decoding - -- Smaller channel models (`Tranformer Base` with 1 encoder and decoder layer each vs. `Transformer Big`) - - This involves training a channel model that is possibly smaller and less accurate in terms of BLEU than a channel model of the same size as the direct model. - - Since the role of the channel model is mainly to assign low scores to generations from the language model if they don't translate back to the source, we may not need the most accurate channel model for this purpose. -- Smaller output vocabulary size for the channel model (~30,000 -> ~1000) - - The channel model doesn't need to score the full output vocabulary, it just needs to score the source tokens, which are completely known. 
- - This is specified using the arguments `--channel-scoring-type src_vocab --top-k-vocab 500` - - This means that the output vocabulary for the channel model will be the source tokens for all examples in the batch and the top-K most frequent tokens in the vocabulary - - This reduces the memory consumption needed to store channel model scores significantly -- Smaller number of candidates (`k2`) scored per beam - - This is specified by reducing the argument `--k2` - - -### Fast Noisy Channel Generation for German-English translation with fairseq - -Here are instructions for **fast** noisy channel generation with a direct model, channel model and language model as explained in section [Fast Noisy Channel Modeling](#fast-noisy-channel-modeling). The main differences are that we use a smaller channel model, reduce `--k2`, set `--channel-scoring-type src_vocab --top-k-vocab 500` and increase the `--batch-size`. - -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -small_ch_model=en_de.base_1_1.seed4.pt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt -O ${small_ch_model} - -k2=3 -lenpen=0.23 -lm_wt=0.58 -bw_wt=0.26 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --channel-model ${small_ch_model} \ - --k2 ${k2} \ - --combine-method noisy_channel \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --ch-wt ${bw_wt} \ - --gen-subset test \ - --remove-bpe \ - --fp16 \ - --batch-size 50 \ - --channel-scoring-type src_vocab --top-k-vocab 500 -``` - -## Test Data Preprocessing - -For preprocessing and binarizing the test sets for Romanian-English and German-English translation, we use the following script - - -```sh -FAIRSEQ=/path/to/fairseq -cd $FAIRSEQ -SCRIPTS=$FAIRSEQ/mosesdecoder/scripts -if [ ! -d "${SCRIPTS}" ]; then - echo 'Cloning Moses github repository (for tokenization scripts)...' 
- git clone https://github.com/moses-smt/mosesdecoder.git -fi -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -NORMALIZE=$SCRIPTS/tokenizer/normalize-punctuation.perl - -s=de -t=en -test=wmt18 - -mkdir -p data_dir - -# Tokenization -if [ $s == "ro" ] ; then - # Note: Get normalise-romanian.py and remove-diacritics.py from - # https://github.com/rsennrich/wmt16-scripts/tree/master/preprocess - sacrebleu -t $test -l $s-$t --echo src | \ - $NORMALIZE -l $s | \ - python normalise-romanian.py | \ - python remove-diacritics.py | \ - $TOKENIZER -l $s -a -q > data_dir/$test.$s-$t.$s -else - sacrebleu -t $test -l $s-$t --echo src | perl $NORMALIZE -l $s | perl $TOKENIZER -threads 8 -a -l $s > data_dir/$test.$s-$t.$s -fi - -sacrebleu -t $test -l $s-$t --echo ref | perl $NORMALIZE -l $t | perl $TOKENIZER -threads 8 -a -l $t > data_dir/$test.$s-$t.$t - - -# Applying BPE -src_bpe_code=/path/to/source/language/bpe/code -tgt_bpe_code=/path/to/target/language/bpe/code -src_dict=/path/to/source/language/dict -tgt_dict=/path/to/target/language/dict - -FASTBPE=$FAIRSEQ/fastBPE -if [ ! -d "${FASTBPE}" ] ; then - git clone https://github.com/glample/fastBPE.git - # Follow compilation instructions at https://github.com/glample/fastBPE - g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast -fi - -${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${src_bpe_code} -${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${tgt_bpe_code} - -fairseq-preprocess -s $s -t $t \ - --testpref data_dir/bpe.$test.$s-$t \ - --destdir data_dir/binarized \ - --srcdict ${src_dict} \ - --tgtdict ${tgt_dict} -``` - -## Calculating BLEU - -```sh -DETOKENIZER=$SCRIPTS/tokenizer/detokenizer.perl -cat ${generation_output} | grep -P "^H" | sort -V | cut -f 3- | $DETOKENIZER -l $t -q -a | sacrebleu -t $test -l $s-$t -``` - - -## Romanian-English Translation - -The direct and channel models are trained using bitext data (WMT16) combined with backtranslated data (The monolingual data used for backtranslation comes from http://data.statmt.org/rsennrich/wmt16_backtranslations/ (Sennrich et al., 2016c)) - -The backtranslated data is generated using an ensemble of 3 English-Romanian models trained on bitext training data (WMT16) with unrestricted sampling. - -### BPE Codes and Dictionary - -We learn a joint BPE vocabulary of 18K types on the bitext training data which is used for both the source and target. -||Path| -|----------|------| -| BPE Code | [joint_bpe_18k](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/bpe_18k) | -| Dictionary | [dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/dict) | - -### Direct Models -For Ro-En with backtranslation, the direct and channel models use a Transformer-Big architecture. - -| Seed | Model | -|----|----| -| 2 | [ro_en_seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed2.pt) -| 4 | [ro_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed4.pt) -| 6 | [ro_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed6.pt) - -### Channel Models -For channel models, we follow the same steps as for the direct models. But backtranslated data is generated in the opposite direction using [this Romanian monolingual data](http://data.statmt.org/rsennrich/wmt16_backtranslations/). -The best lenpen, LM weight and CH weight are obtained by sweeping over the validation set (wmt16/dev) using beam 5. 
-| Model Size | Lenpen | LM Weight | CH Weight | Seed 2 | Seed 4 | Seed 6 | -|----|----|----|----|----|----|----| -| `big` | 0.84 | 0.64 | 0.56 | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | -| `base_1_1` | 0.63 | 0.40 | 0.37 | [base_1_1.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed2.pt) | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed6.pt) | - -### Language Model -The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization. -| | Path | -|----|----| -| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/transformer_lm.pt) | -| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/lm_dict) - -## German-English Translation - -### BPE Codes and Dictionaries - -| | Path| -|----------|------| -| Source BPE Code | [de_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_bpe_code_24K) | -| Target BPE Code | [en_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_bpe_code_24K) -| Source Dictionary | [de_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_dict) | -| Target Dictionary | [en_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_dict) | - -### Direct Models -We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs. -We use the Transformer-Big architecture for the direct model. - -| Seed | Model | -|:----:|----| -| 4 | [de_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt) -| 5 | [de_en_seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed5.pt) -| 6 | [de_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed6.pt) - -### Channel Models - -We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs. 
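-As an illustration of the length and ratio filtering described above (the language-identification step is omitted and the file names are placeholders, not the actual WMT'19 paths), the filter can be sketched as:
-
-```sh
-# Keep pairs where both sides have at most 250 tokens and the
-# source/target length ratio does not exceed 1.5.
-paste train.de train.en | awk -F'\t' '{
-  ns = split($1, a, " "); nt = split($2, b, " ");
-  if (ns > 0 && nt > 0 && ns <= 250 && nt <= 250 && ns/nt <= 1.5 && nt/ns <= 1.5) print;
-}' > train.filtered.tsv
-cut -f1 train.filtered.tsv > train.filtered.de
-cut -f2 train.filtered.tsv > train.filtered.en
-```
-
-Moses' `clean-corpus-n.perl` provides roughly the same length and ratio filtering if an off-the-shelf script is preferred.
-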
- -| Model Size | Seed 4 | Seed 5 | Seed 6 | -|----|----|----|----| -| `big` | [big.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt) | [big.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed5.pt) | [big.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed6.pt) | -| `big_1_1` | [big_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed4.pt) | [big_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed5.pt) | [big_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed6.pt) | -| `base` | [base.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed4.pt) | [base.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed5.pt) | [base.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed6.pt) | -| `base_1_1` | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed5.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed6.pt) | -| `half` | [half.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed4.pt) | [half.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed5.pt) | [half.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed6.pt) | -| `half_1_1` | [half_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed4.pt) | [half_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed5.pt) | [half_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed6.pt) | -| `quarter` | [quarter.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed4.pt) | [quarter.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed5.pt) | [quarter.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed6.pt) | -| `quarter_1_1` | [quarter_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed4.pt) | [quarter_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed5.pt) | [quarter_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed6.pt) | -| `8th` | [8th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed4.pt) | [8th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed5.pt) | [8th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed6.pt) | -| `8th_1_1` | [8th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed4.pt) | [8th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed5.pt) | [8th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed6.pt) | -| `16th` | 
[16th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed4.pt) | [16th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed5.pt) | [16th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed6.pt) | -| `16th_1_1` | [16th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed4.pt) | [16th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed5.pt) | [16th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed6.pt) | - -### Language Model -The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization. -| | Path | -|----|----| -| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt) | -| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/) - - -## Citation - -```bibtex -@inproceedings{bhosale2020language, - title={Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling}, - author={Shruti Bhosale and Kyra Yee and Sergey Edunov and Michael Auli}, - booktitle={Proceedings of the Fifth Conference on Machine Translation (WMT)}, - year={2020}, -} - -@inproceedings{yee2019simple, - title={Simple and Effective Noisy Channel Modeling for Neural Machine Translation}, - author={Yee, Kyra and Dauphin, Yann and Auli, Michael}, - booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, - pages={5700--5705}, - year={2019} -} -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py deleted file mode 100644 index cf08d1fe4b470477b724aa8d770d91c0cac35a0e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List, Tuple - - -def get_audio_files(manifest_path: str) -> Tuple[str, List[str], List[int]]: - fnames, sizes = [], [] - with open(manifest_path, "r") as f: - root_dir = f.readline().strip() - for line in f: - items = line.strip().split("\t") - assert ( - len(items) == 2 - ), f"File must have two columns separated by tab. 
Got {line}" - fnames.append(items[0]) - sizes.append(int(items[1])) - return root_dir, fnames, sizes diff --git a/spaces/HiepPhuocSS/TimeSFormer/utils/frame_rate.py b/spaces/HiepPhuocSS/TimeSFormer/utils/frame_rate.py deleted file mode 100644 index 37de8d4d17e5eed5505033e87bd50287dab1073c..0000000000000000000000000000000000000000 --- a/spaces/HiepPhuocSS/TimeSFormer/utils/frame_rate.py +++ /dev/null @@ -1,55 +0,0 @@ -import time -from typing import Optional - -import cv2 -import numpy as np -from PIL import Image, ImageDraw, ImageFont - - -class FrameRate: - def __init__(self) -> None: - self.c: int = 0 - self.start_time: Optional[float] = None - self.NO_FRAMES = 10 - self.fps: float = -1 - self.label: str = "" - self.font = ImageFont.truetype("fonts/arial.ttf", 50) - self.reset() - - def reset(self) -> None: - self.start_time = time.time() - self.c = 0 - self.fps = -1 - - def count(self) -> None: - self.c += 1 - if self.c % self.NO_FRAMES == 0: - self.c = 0 - end_time = time.time() - self.fps = self.NO_FRAMES / (end_time - self.start_time) - self.start_time = end_time - - def show_fps(self, image: np.ndarray, is_recording=False) -> np.ndarray: - if self.fps != -1: - text = f"FPS {self.fps:.0f} _ {self.label}" - # image = cv2.putText( - # image, - # text, - # (50, 50), - # cv2.FONT_HERSHEY_SIMPLEX, - # fontScale=1, - # color=(255, 0, 0), - # thickness=2, - # ) - pil_image = Image.fromarray(image) - draw = ImageDraw.Draw(pil_image) - draw.text((50, 50), text, font=self.font, fill=(0, 0, 204)) - image = np.asarray(pil_image) - - if is_recording: - image = cv2.circle( - image, (50, 100), radius=10, color=(0, 0, 255), thickness=-1 - ) - return image - else: - return image diff --git a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/tests/test_text_models.py b/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/tests/test_text_models.py deleted file mode 100644 index 127adfa6337333ba5ae598fcd158956def0d520f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/tests/test_text_models.py +++ /dev/null @@ -1,407 +0,0 @@ -import argparse -import unittest -from typing import Any, Dict - -import torch -from examples.simultaneous_translation.models import ( - transformer_monotonic_attention -) - - -from tests.test_roberta import FakeTask - - -DEFAULT_CONFIG = { - "attention_eps": 1e-6, - "mass_preservation": True, - "noise_type": "flat", - "noise_mean": 0.0, - "noise_var": 1.0, - "energy_bias_init": -2, - "energy_bias": True -} - - -PAD_INDEX = 1 - - -def generate_config(overrides_kv): - new_dict = {key: value for key, value in DEFAULT_CONFIG.items()} - for key, value in overrides_kv.items(): - new_dict[key] = value - return new_dict - - -def make_sample_with_padding(longer_src=False) -> Dict[str, Any]: - tokens_1 = torch.LongTensor( - [ - [2, 10, 11, 12, 13, 14, 15, 10, 11, 12, 13, 14, 15, 2], - [ - 2, 11, 12, 14, 15, 10, 11, 12, 13, 14, 15, 2, - PAD_INDEX, PAD_INDEX - ], - ] - ) - tokens_2 = torch.LongTensor( - [ - [2, 11, 12, 13, 14, 2, PAD_INDEX, PAD_INDEX], - [2, 11, 22, 33, 2, PAD_INDEX, PAD_INDEX, PAD_INDEX] - ] - ) - if longer_src: - src_tokens = tokens_1[:, 1:] - prev_output_tokens = tokens_2 - else: - src_tokens = tokens_2[:, 1:8] - prev_output_tokens = tokens_1 - - src_lengths = src_tokens.ne(PAD_INDEX).sum(dim=1).long() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "prev_output_tokens": prev_output_tokens, - "src_lengths": src_lengths, - }, - "target": prev_output_tokens[:, 1:], - } - return 
sample - - -def build_transformer_monotonic_attention(**extra_args: Any): - overrides = { - # Use characteristics dimensions - "encoder_embed_dim": 12, - "encoder_ffn_embed_dim": 14, - "decoder_embed_dim": 12, - "decoder_ffn_embed_dim": 14, - # Disable dropout so we have comparable tests. - "dropout": 0, - "attention_dropout": 0, - "activation_dropout": 0, - "encoder_layerdrop": 0, - } - overrides.update(extra_args) - # Overrides the defaults from the parser - args = argparse.Namespace(**overrides) - transformer_monotonic_attention.monotonic_tiny_architecture(args) - - torch.manual_seed(0) - task = FakeTask(args) - return ( - transformer_monotonic_attention - .TransformerModelSimulTrans - .build_model(args, task) - ) - - -def expected_alignment_formula( - p_choose, - mass_perservation=True, - padding_mask=None -): - # Online and Linear-Time Attention by Enforcing Monotonic Alignments - # https://arxiv.org/pdf/1704.00784.pdf - # Eq 18, 19 - bsz, tgt_len, src_len = p_choose.size() - alpha = torch.zeros_like(p_choose) - - if padding_mask is not None: - bsz_pad = padding_mask.size(0) - num_heads = int(bsz / bsz_pad) - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz_pad, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - - p_choose = p_choose.masked_fill(padding_mask.unsqueeze(1), 0) - - for bsz_i in range(bsz): - for i in range(tgt_len): - for j in range(src_len): - if i == 0: - if j == 0: - # First source token - alpha[bsz_i, i, j] = p_choose[bsz_i, i, j] - else: - # First target token - alpha[bsz_i, i, j] = ( - p_choose[bsz_i, i, j] - * torch.prod( - 1 - p_choose[bsz_i, i, :j] - ) - ) - else: - alpha[bsz_i, i, j] = alpha[bsz_i, i - 1, j] - for k in range(j): - alpha[bsz_i, i, j] += ( - alpha[bsz_i, i - 1, k] - * torch.prod( - 1 - p_choose[bsz_i, i, k:j] - ) - ) - alpha[bsz_i, i, j] *= p_choose[bsz_i, i, j] - - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0) - - if mass_perservation: - alpha = mass_perservation_formula(alpha, False, padding_mask) - - return alpha - - -def mass_perservation_formula(alpha, left_padding=False, padding_mask=None): - if padding_mask is None or alpha.size(-1) == 1: - if alpha.size(-1) > 1: - alpha[:, :, -1] = 1 - alpha[:, :, :-1].sum(dim=-1) - return alpha - - src_lens = (padding_mask.logical_not()).sum(dim=1).long() - - bsz, tgt_len, src_len = alpha.size() - - assert ( - not left_padding - or (left_padding and (not padding_mask[:, 0].any())) - ) - - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0) - - for bsz_i in range(bsz): - if left_padding: - alpha[bsz_i, :, -1] = ( - 1 - alpha[bsz_i, :, :-1].sum(dim=-1) - ) - else: - alpha[bsz_i, :, src_lens[bsz_i] - 1] = ( - 1 - alpha[bsz_i, :, :src_lens[bsz_i] - 1].sum(dim=-1) - ) - - return alpha - - -def expected_soft_attention_formula( - alpha, - soft_energy, - padding_mask=None, - chunksize=1e10, -): - # Monotonic Infinite Lookback Attention for Simultaneous Machine Translation - # https://arxiv.org/pdf/1906.05218.pdf - # Eq 14 - - # Monotonic Chunkwise Attention - # https://arxiv.org/abs/1712.05382 - # Eq 17 - bsz, tgt_len, src_len = alpha.size() - beta = torch.zeros_like(alpha) - - if padding_mask is not None: - bsz_pad = padding_mask.size(0) - num_heads = int(bsz / bsz_pad) - # Expanding for potential head dimension - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz_pad, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - soft_energy = soft_energy.masked_fill(padding_mask.unsqueeze(1), float('-inf')) - - for bsz_i in range(bsz): - for i in 
range(tgt_len): - for j in range(src_len): - for k in range(j, min([src_len, j + chunksize])): - if not padding_mask[bsz_i, j]: - beta[bsz_i, i, j] += ( - alpha[bsz_i, i, k] * torch.exp(soft_energy[bsz_i, i, j]) - / torch.sum(torch.exp(soft_energy[bsz_i, i, max([0, k - chunksize + 1]):k + 1])) - ) - return beta - - -class MonotonicAttentionTestAbstractClass(object): - def test_forward(self): - sample = make_sample_with_padding() - out, _ = self.model.forward(**sample["net_input"]) - loss = out.sum() - loss.backward() - - def test_p_choose(self): - sample = make_sample_with_padding() - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - self.assertTrue(p_choose.le(1.0).all()) - self.assertTrue(p_choose.ge(0.0).all()) - - def test_expected_alignment(self): - for longer_src in [True, False]: - sample = make_sample_with_padding(longer_src) - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - alpha_system = item["alpha"] - self.assertTrue(p_choose.size() == alpha_system.size()) - bsz, num_head, tgt_len, src_len = alpha_system.size() - alpha_system = alpha_system.view(-1, tgt_len, src_len) - p_choose = p_choose.view(-1, tgt_len, src_len) - - alpha_real = expected_alignment_formula( - p_choose, - self.model.decoder.layers[0].encoder_attn.mass_preservation, - sample["net_input"]["src_tokens"].eq(PAD_INDEX) - ) - - self.assertTrue( - torch.abs(alpha_system - alpha_real).le(5e-5).all(), - ) - - -class HardMonotonicAttentionTestCase( - unittest.TestCase, - MonotonicAttentionTestAbstractClass -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config({"simul_type": "hard_aligned"}) - ) - - -class InfiniteLookbackTestCase( - unittest.TestCase, - MonotonicAttentionTestAbstractClass -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "infinite_lookback" - } - ) - ) - self.model.train() - - def test_fp16_for_long_input(self): - sample = { - "net_input": { - "src_tokens": torch.LongTensor([7] * 1000 + [2]).cuda().unsqueeze(0), - "prev_output_tokens": torch.LongTensor([7] * 1000 + [2]).cuda().unsqueeze(0), - "src_lengths": torch.LongTensor([1000]).cuda(), - }, - "target": torch.LongTensor([2] + [7] * 1000).unsqueeze(0).cuda() - } - self.model.cuda().half() - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - for key in ["p_choose", "alpha", "beta", "soft_energy"]: - self.assertFalse(torch.isnan(item[key]).any()) - - def test_expected_attention(self): - for longer_src in [True, False]: - sample = make_sample_with_padding(longer_src) - _, extra_out = self.model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - alpha_system = item["alpha"] - beta_system = item["beta"] - soft_energy_system = item["soft_energy"] - self.assertTrue(beta_system.size() == alpha_system.size()) - self.assertTrue(p_choose.size() == alpha_system.size()) - - bsz, num_head, tgt_len, src_len = alpha_system.size() - - alpha_system = alpha_system.view(-1, tgt_len, src_len) - beta_system = beta_system.view(-1, tgt_len, src_len) - p_choose = p_choose.view(-1, tgt_len, src_len) - soft_energy_system = soft_energy_system.view(-1, tgt_len, src_len) - - alpha_real = expected_alignment_formula( - p_choose, - self.model.decoder.layers[0].encoder_attn.mass_preservation, - 
sample["net_input"]["src_tokens"].eq(PAD_INDEX) - ) - - beta_real = expected_soft_attention_formula( - alpha_real, - soft_energy_system, - sample["net_input"]["src_tokens"].eq(PAD_INDEX), - chunksize=getattr( - self.model.decoder.layers[0].encoder_attn, - "chunk_size", - int(1e10) - ) - ) - - self.assertTrue( - torch.abs(beta_system - beta_real).le(1e-5).all(), - ) - - -class ChunkwiswTestCase( - InfiniteLookbackTestCase -): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "chunkwise", - "mocha_chunk_size": 3 - } - ) - ) - - -class WaitkTestCase(InfiniteLookbackTestCase): - def setUp(self): - self.model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "waitk", - "waitk_lagging": 3, - } - ) - ) - - def check_waitk(self, p_choose, lagging, padding_mask): - bsz, tgt_len, src_len = p_choose.size() - for bsz_i in range(bsz): - for i in range(tgt_len): - for j in range(src_len): - if not padding_mask[bsz_i, j]: - if j - i == lagging - 1: - self.assertTrue(p_choose[bsz_i, i, j] == 1) - else: - self.assertTrue(p_choose[bsz_i, i, j] == 0) - - def test_waitk_p_choose(self): - for longer_src in [True, False]: - for k in [1, 3, 10, 20, 100]: - sample = make_sample_with_padding(longer_src) - model = build_transformer_monotonic_attention( - **generate_config( - { - "simul_type": "waitk", - "waitk_lagging": k, - } - ) - ) - model.train() - _, extra_out = model.forward(**sample["net_input"]) - for item in extra_out.attn_list: - p_choose = item["p_choose"] - bsz, num_heads, tgt_len, src_len = p_choose.size() - padding_mask = sample["net_input"]["src_tokens"].eq(PAD_INDEX) - padding_mask = ( - padding_mask - .unsqueeze(1) - .expand([bsz, num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - p_choose = p_choose.view(bsz * num_heads, tgt_len, src_len) - self.check_waitk(p_choose, k, padding_mask) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/character_token_embedder.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/character_token_embedder.py deleted file mode 100644 index 181221b61b9f76453b67e3b848b198620dce912c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/character_token_embedder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from typing import List, Tuple - -import torch -import torch.nn.functional as F -from fairseq.data import Dictionary -from torch import nn - - -CHAR_PAD_IDX = 0 -CHAR_EOS_IDX = 257 - - -logger = logging.getLogger(__name__) - - -class CharacterTokenEmbedder(torch.nn.Module): - def __init__( - self, - vocab: Dictionary, - filters: List[Tuple[int, int]], - char_embed_dim: int, - word_embed_dim: int, - highway_layers: int, - max_char_len: int = 50, - char_inputs: bool = False, - ): - super(CharacterTokenEmbedder, self).__init__() - - self.onnx_trace = False - self.embedding_dim = word_embed_dim - self.max_char_len = max_char_len - self.char_embeddings = nn.Embedding(257, char_embed_dim, padding_idx=0) - self.symbol_embeddings = nn.Parameter(torch.FloatTensor(2, word_embed_dim)) - self.eos_idx, self.unk_idx = 0, 1 - self.char_inputs = char_inputs - - self.convolutions = nn.ModuleList() - for width, out_c in filters: - self.convolutions.append( - nn.Conv1d(char_embed_dim, out_c, kernel_size=width) - ) - - last_dim = sum(f[1] for f in filters) - - self.highway = Highway(last_dim, highway_layers) if highway_layers > 0 else None - - self.projection = nn.Linear(last_dim, word_embed_dim) - - assert ( - vocab is not None or char_inputs - ), "vocab must be set if not using char inputs" - self.vocab = None - if vocab is not None: - self.set_vocab(vocab, max_char_len) - - self.reset_parameters() - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def set_vocab(self, vocab, max_char_len): - word_to_char = torch.LongTensor(len(vocab), max_char_len) - - truncated = 0 - for i in range(len(vocab)): - if i < vocab.nspecial: - char_idxs = [0] * max_char_len - else: - chars = vocab[i].encode() - # +1 for padding - char_idxs = [c + 1 for c in chars] + [0] * (max_char_len - len(chars)) - if len(char_idxs) > max_char_len: - truncated += 1 - char_idxs = char_idxs[:max_char_len] - word_to_char[i] = torch.LongTensor(char_idxs) - - if truncated > 0: - logger.info( - "truncated {} words longer than {} characters".format( - truncated, max_char_len - ) - ) - - self.vocab = vocab - self.word_to_char = word_to_char - - @property - def padding_idx(self): - return Dictionary().pad() if self.vocab is None else self.vocab.pad() - - def reset_parameters(self): - nn.init.xavier_normal_(self.char_embeddings.weight) - nn.init.xavier_normal_(self.symbol_embeddings) - nn.init.xavier_uniform_(self.projection.weight) - - nn.init.constant_( - self.char_embeddings.weight[self.char_embeddings.padding_idx], 0.0 - ) - nn.init.constant_(self.projection.bias, 0.0) - - def forward( - self, - input: torch.Tensor, - ): - if self.char_inputs: - chars = input.view(-1, self.max_char_len) - pads = chars[:, 0].eq(CHAR_PAD_IDX) - eos = chars[:, 0].eq(CHAR_EOS_IDX) - if eos.any(): - if self.onnx_trace: - chars = torch.where(eos.unsqueeze(1), chars.new_zeros(1), chars) - else: - chars[eos] = 0 - - unk = None - else: - flat_words = input.view(-1) - chars = self.word_to_char[flat_words.type_as(self.word_to_char)].type_as( - input - ) - pads = flat_words.eq(self.vocab.pad()) - eos = flat_words.eq(self.vocab.eos()) - unk = flat_words.eq(self.vocab.unk()) - - word_embs = self._convolve(chars) - if self.onnx_trace: - if pads.any(): - word_embs = torch.where( - pads.unsqueeze(1), word_embs.new_zeros(1), word_embs - ) - if eos.any(): - word_embs = torch.where( - eos.unsqueeze(1), self.symbol_embeddings[self.eos_idx], word_embs - ) - if unk is not None and unk.any(): - word_embs = torch.where( - unk.unsqueeze(1), 
self.symbol_embeddings[self.unk_idx], word_embs - ) - else: - if pads.any(): - word_embs[pads] = 0 - if eos.any(): - word_embs[eos] = self.symbol_embeddings[self.eos_idx] - if unk is not None and unk.any(): - word_embs[unk] = self.symbol_embeddings[self.unk_idx] - - return word_embs.view(input.size()[:2] + (-1,)) - - def _convolve( - self, - char_idxs: torch.Tensor, - ): - char_embs = self.char_embeddings(char_idxs) - char_embs = char_embs.transpose(1, 2) # BTC -> BCT - - conv_result = [] - - for conv in self.convolutions: - x = conv(char_embs) - x, _ = torch.max(x, -1) - x = F.relu(x) - conv_result.append(x) - - x = torch.cat(conv_result, dim=-1) - - if self.highway is not None: - x = self.highway(x) - x = self.projection(x) - - return x - - -class Highway(torch.nn.Module): - """ - A `Highway layer `_. - Adopted from the AllenNLP implementation. - """ - - def __init__(self, input_dim: int, num_layers: int = 1): - super(Highway, self).__init__() - self.input_dim = input_dim - self.layers = nn.ModuleList( - [nn.Linear(input_dim, input_dim * 2) for _ in range(num_layers)] - ) - self.activation = nn.ReLU() - - self.reset_parameters() - - def reset_parameters(self): - for layer in self.layers: - # As per comment in AllenNLP: - # We should bias the highway layer to just carry its input forward. We do that by - # setting the bias on `B(x)` to be positive, because that means `g` will be biased to - # be high, so we will carry the input forward. The bias on `B(x)` is the second half - # of the bias vector in each Linear layer. - nn.init.constant_(layer.bias[self.input_dim :], 1) - - nn.init.constant_(layer.bias[: self.input_dim], 0) - nn.init.xavier_normal_(layer.weight) - - def forward(self, x: torch.Tensor): - for layer in self.layers: - projection = layer(x) - proj_x, gate = projection.chunk(2, dim=-1) - proj_x = self.activation(proj_x) - gate = torch.sigmoid(gate) - x = gate * x + (gate.new_tensor([1]) - gate) * proj_x - return x diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/colors.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/colors.py deleted file mode 100644 index 9e9e39182c58cb06a1c5e97a7e6c497cc3388ebe..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/colors.py +++ /dev/null @@ -1,76 +0,0 @@ -import random -import colorsys - -import numpy as np -import matplotlib -matplotlib.use('agg') -import matplotlib.pyplot as plt -from matplotlib.colors import LinearSegmentedColormap - - -def generate_colors(nlabels, type='bright', first_color_black=False, last_color_black=True, verbose=False): - # https://stackoverflow.com/questions/14720331/how-to-generate-random-colors-in-matplotlib - """ - Creates a random colormap to be used together with matplotlib. Useful for segmentation tasks - :param nlabels: Number of labels (size of colormap) - :param type: 'bright' for strong colors, 'soft' for pastel colors - :param first_color_black: Option to use first color as black, True or False - :param last_color_black: Option to use last color as black, True or False - :param verbose: Prints the number of labels and shows the colormap. 
True or False - :return: colormap for matplotlib - """ - if type not in ('bright', 'soft'): - print ('Please choose "bright" or "soft" for type') - return - - if verbose: - print('Number of labels: ' + str(nlabels)) - - # Generate color map for bright colors, based on hsv - if type == 'bright': - randHSVcolors = [(np.random.uniform(low=0.0, high=1), - np.random.uniform(low=0.2, high=1), - np.random.uniform(low=0.9, high=1)) for i in range(nlabels)] - - # Convert HSV list to RGB - randRGBcolors = [] - for HSVcolor in randHSVcolors: - randRGBcolors.append(colorsys.hsv_to_rgb(HSVcolor[0], HSVcolor[1], HSVcolor[2])) - - if first_color_black: - randRGBcolors[0] = [0, 0, 0] - - if last_color_black: - randRGBcolors[-1] = [0, 0, 0] - - random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels) - - # Generate soft pastel colors, by limiting the RGB spectrum - if type == 'soft': - low = 0.6 - high = 0.95 - randRGBcolors = [(np.random.uniform(low=low, high=high), - np.random.uniform(low=low, high=high), - np.random.uniform(low=low, high=high)) for i in range(nlabels)] - - if first_color_black: - randRGBcolors[0] = [0, 0, 0] - - if last_color_black: - randRGBcolors[-1] = [0, 0, 0] - random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels) - - # Display colorbar - if verbose: - from matplotlib import colors, colorbar - from matplotlib import pyplot as plt - fig, ax = plt.subplots(1, 1, figsize=(15, 0.5)) - - bounds = np.linspace(0, nlabels, nlabels + 1) - norm = colors.BoundaryNorm(bounds, nlabels) - - cb = colorbar.ColorbarBase(ax, cmap=random_colormap, norm=norm, spacing='proportional', ticks=None, - boundaries=bounds, format='%1i', orientation=u'horizontal') - - return randRGBcolors, random_colormap - diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/train_mixed.sh b/spaces/JUNGU/VToonify/vtoonify/model/raft/train_mixed.sh deleted file mode 100644 index d9b979f143902a17a0ba7b0a8f960598b7096e0b..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/raft/train_mixed.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -mkdir -p checkpoints -python -u train.py --name raft-chairs --stage chairs --validation chairs --gpus 0 --num_steps 120000 --batch_size 8 --lr 0.00025 --image_size 368 496 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-things --stage things --validation sintel --restore_ckpt checkpoints/raft-chairs.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 400 720 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-sintel --stage sintel --validation sintel --restore_ckpt checkpoints/raft-things.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 368 768 --wdecay 0.00001 --gamma=0.85 --mixed_precision -python -u train.py --name raft-kitti --stage kitti --validation kitti --restore_ckpt checkpoints/raft-sintel.pth --gpus 0 --num_steps 50000 --batch_size 5 --lr 0.0001 --image_size 288 960 --wdecay 0.00001 --gamma=0.85 --mixed_precision diff --git a/spaces/Jamkonams/AutoGPT/autogpt/commands/image_gen.py b/spaces/Jamkonams/AutoGPT/autogpt/commands/image_gen.py deleted file mode 100644 index 0809fcdd3e38b52a2ce09ca1444f2574813d40f9..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/commands/image_gen.py +++ /dev/null @@ -1,163 +0,0 @@ -""" Image Generation Module for AutoGPT.""" -import io -import os.path -import uuid -from base64 import b64decode - -import openai -import requests -from PIL import Image - 
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-CFG = Config()
-
-
-def generate_image(prompt: str, size: int = 256) -> str:
-    """Generate an image from a prompt.
-
-    Args:
-        prompt (str): The prompt to use
-        size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace)
-
-    Returns:
-        str: The filename of the image
-    """
-    filename = f"{str(uuid.uuid4())}.jpg"
-
-    # DALL-E
-    if CFG.image_provider == "dalle":
-        return generate_image_with_dalle(prompt, filename, size)
-    # HuggingFace
-    elif CFG.image_provider == "huggingface":
-        return generate_image_with_hf(prompt, filename)
-    # SD WebUI
-    elif CFG.image_provider == "sdwebui":
-        return generate_image_with_sd_webui(prompt, filename, size)
-    return "No Image Provider Set"
-
-
-def generate_image_with_hf(prompt: str, filename: str) -> str:
-    """Generate an image with HuggingFace's API.
-
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-
-    Returns:
-        str: The filename of the image
-    """
-    API_URL = (
-        f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}"
-    )
-    if CFG.huggingface_api_token is None:
-        raise ValueError(
-            "You need to set your Hugging Face API token in the config file."
-        )
-    headers = {
-        "Authorization": f"Bearer {CFG.huggingface_api_token}",
-        "X-Use-Cache": "false",
-    }
-
-    response = requests.post(
-        API_URL,
-        headers=headers,
-        json={
-            "inputs": prompt,
-        },
-    )
-
-    image = Image.open(io.BytesIO(response.content))
-    print(f"Image Generated for prompt:{prompt}")
-
-    image.save(path_in_workspace(filename))
-
-    return f"Saved to disk:{filename}"
-
-
-def generate_image_with_dalle(prompt: str, filename: str, size: int = 256) -> str:
-    """Generate an image with DALL-E.
-
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-        size (int, optional): The size of the image. Defaults to 256.
-
-    Returns:
-        str: The filename of the image
-    """
-    openai.api_key = CFG.openai_api_key
-
-    # Check for supported image sizes
-    if size not in [256, 512, 1024]:
-        closest = min([256, 512, 1024], key=lambda x: abs(x - size))
-        print(
-            f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}."
-        )
-        size = closest
-
-    response = openai.Image.create(
-        prompt=prompt,
-        n=1,
-        size=f"{size}x{size}",
-        response_format="b64_json",
-    )
-
-    print(f"Image Generated for prompt:{prompt}")
-
-    image_data = b64decode(response["data"][0]["b64_json"])
-
-    with open(path_in_workspace(filename), mode="wb") as png:
-        png.write(image_data)
-
-    return f"Saved to disk:{filename}"
-
-
-def generate_image_with_sd_webui(
-    prompt: str,
-    filename: str,
-    size: int = 512,
-    negative_prompt: str = "",
-    extra: dict = {},
-) -> str:
-    """Generate an image with Stable Diffusion webui.
-    Args:
-        prompt (str): The prompt to use
-        filename (str): The filename to save the image to
-        size (int, optional): The size of the image. Defaults to 512.
-        negative_prompt (str, optional): The negative prompt to use. Defaults to "".
-        extra (dict, optional): Extra parameters to pass to the API. Defaults to {}.
- Returns: - str: The filename of the image - """ - # Create a session and set the basic auth if needed - s = requests.Session() - if CFG.sd_webui_auth: - username, password = CFG.sd_webui_auth.split(":") - s.auth = (username, password or "") - - # Generate the images - response = requests.post( - f"{CFG.sd_webui_url}/sdapi/v1/txt2img", - json={ - "prompt": prompt, - "negative_prompt": negative_prompt, - "sampler_index": "DDIM", - "steps": 20, - "cfg_scale": 7.0, - "width": size, - "height": size, - "n_iter": 1, - **extra, - }, - ) - - print(f"Image Generated for prompt:{prompt}") - - # Save the image to disk - response = response.json() - b64 = b64decode(response["images"][0].split(",", 1)[0]) - image = Image.open(io.BytesIO(b64)) - image.save(path_in_workspace(filename)) - - return f"Saved to disk:{filename}" diff --git a/spaces/JonaSosa/spam_filter/README.md b/spaces/JonaSosa/spam_filter/README.md deleted file mode 100644 index 2e96de5dfeb14306c51324a398c342f3262aa6ab..0000000000000000000000000000000000000000 --- a/spaces/JonaSosa/spam_filter/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Spam Filter -emoji: 👀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/model_components/simple_module.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/model_components/simple_module.py deleted file mode 100644 index 8234efb360fb32d08dd1cce87649b5eeb5b1603e..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/model_components/simple_module.py +++ /dev/null @@ -1,125 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -from salad.model_components.transformer import TimeMLP - - -class TimePointwiseLayer(nn.Module): - def __init__( - self, - dim_in, - dim_ctx, - mlp_ratio=2, - act=F.leaky_relu, - dropout=0.0, - use_time=False, - ): - super().__init__() - self.use_time = use_time - self.act = act - self.mlp1 = TimeMLP( - dim_in, dim_in * mlp_ratio, dim_in, dim_ctx, use_time=use_time - ) - self.norm1 = nn.LayerNorm(dim_in) - - self.mlp2 = TimeMLP( - dim_in, dim_in * mlp_ratio, dim_in, dim_ctx, use_time=use_time - ) - self.norm2 = nn.LayerNorm(dim_in) - self.dropout = nn.Dropout(dropout) - - def forward(self, x, ctx=None): - res = x - x = self.mlp1(x, ctx=ctx) - x = self.norm1(x + res) - - res = x - x = self.mlp2(x, ctx=ctx) - x = self.norm2(x + res) - return x - - -class TimePointWiseEncoder(nn.Module): - def __init__( - self, - dim_in, - dim_ctx=None, - mlp_ratio=2, - act=F.leaky_relu, - dropout=0.0, - use_time=True, - num_layers=6, - last_fc=False, - last_fc_dim_out=None, - ): - super().__init__() - self.last_fc = last_fc - if last_fc: - self.fc = nn.Linear(dim_in, last_fc_dim_out) - self.layers = nn.ModuleList( - [ - TimePointwiseLayer( - dim_in, - dim_ctx=dim_ctx, - mlp_ratio=mlp_ratio, - act=act, - dropout=dropout, - use_time=use_time, - ) - for _ in range(num_layers) - ] - ) - - def forward(self, x, ctx=None): - for i, layer in enumerate(self.layers): - x = layer(x, ctx=ctx) - if self.last_fc: - x = self.fc(x) - return x - - -class TimestepEmbedder(nn.Module): - """ - Embeds scalar timesteps into vector representations. 
- """ - - def __init__(self, hidden_size, frequency_embedding_size=256): - super().__init__() - self.mlp = nn.Sequential( - nn.Linear(frequency_embedding_size, hidden_size, bias=True), - nn.SiLU(), - nn.Linear(hidden_size, hidden_size, bias=True), - ) - self.frequency_embedding_size = frequency_embedding_size - - @staticmethod - def timestep_embedding(t, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. - :param t: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an (N, D) Tensor of positional embeddings. - """ - # https://github.com/openai/glide-text2im/blob/main/glide_text2im/nn.py - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) - * torch.arange(start=0, end=half, dtype=torch.float32) - / half - ).to(device=t.device) - args = t[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat( - [embedding, torch.zeros_like(embedding[:, :1])], dim=-1 - ) - return embedding - - def forward(self, t): - t_freq = self.timestep_embedding(t, self.frequency_embedding_size) - t_emb = self.mlp(t_freq) - return t_emb diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/vc/__init__.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/vc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/__init__.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/synthesizer_dataset.py b/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/synthesizer_dataset.py deleted file mode 100644 index 9d552d16d0b6757871189037bf0b981c8dfebbaf..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/synthesizer_dataset.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.utils.data import Dataset -import numpy as np -from pathlib import Path -from synthesizer.utils.text import text_to_sequence - - -class SynthesizerDataset(Dataset): - def __init__(self, metadata_fpath: Path, mel_dir: Path, embed_dir: Path, hparams): - print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, embed_dir)) - - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - - mel_fnames = [x[1] for x in metadata if int(x[4])] - mel_fpaths = [mel_dir.joinpath(fname) for fname in mel_fnames] - embed_fnames = [x[2] for x in metadata if int(x[4])] - embed_fpaths = [embed_dir.joinpath(fname) for fname in embed_fnames] - self.samples_fpaths = list(zip(mel_fpaths, embed_fpaths)) - self.samples_texts = [x[5].strip() for x in metadata if int(x[4])] - self.metadata = metadata - self.hparams = hparams - - print("Found %d samples" % len(self.samples_fpaths)) - - def __getitem__(self, index): - # Sometimes index may be a list of 2 (not sure why this happens) - # If that is the case, return a single item corresponding to first element in index - if index is list: - index = index[0] - - mel_path, embed_path = self.samples_fpaths[index] - mel = np.load(mel_path).T.astype(np.float32) - - # Load the embed - embed = np.load(embed_path) - - # Get the text and clean it 
- text = text_to_sequence(self.samples_texts[index], self.hparams.tts_cleaner_names) - - # Convert the list returned by text_to_sequence to a numpy array - text = np.asarray(text).astype(np.int32) - - return text, mel.astype(np.float32), embed.astype(np.float32), index - - def __len__(self): - return len(self.samples_fpaths) - - -def collate_synthesizer(batch, r, hparams): - # Text - x_lens = [len(x[0]) for x in batch] - max_x_len = max(x_lens) - - chars = [pad1d(x[0], max_x_len) for x in batch] - chars = np.stack(chars) - - # Mel spectrogram - spec_lens = [x[1].shape[-1] for x in batch] - max_spec_len = max(spec_lens) + 1 - if max_spec_len % r != 0: - max_spec_len += r - max_spec_len % r - - # WaveRNN mel spectrograms are normalized to [0, 1] so zero padding adds silence - # By default, SV2TTS uses symmetric mels, where -1*max_abs_value is silence. - if hparams.symmetric_mels: - mel_pad_value = -1 * hparams.max_abs_value - else: - mel_pad_value = 0 - - mel = [pad2d(x[1], max_spec_len, pad_value=mel_pad_value) for x in batch] - mel = np.stack(mel) - - # Speaker embedding (SV2TTS) - embeds = [x[2] for x in batch] - - # Index (for vocoder preprocessing) - indices = [x[3] for x in batch] - - - # Convert all to tensor - chars = torch.tensor(chars).long() - mel = torch.tensor(mel) - embeds = torch.tensor(embeds) - - return chars, mel, embeds, indices - -def pad1d(x, max_len, pad_value=0): - return np.pad(x, (0, max_len - len(x)), mode="constant", constant_values=pad_value) - -def pad2d(x, max_len, pad_value=0): - return np.pad(x, ((0, 0), (0, max_len - x.shape[-1])), mode="constant", constant_values=pad_value) diff --git a/spaces/Kotinagendla/MyGenAIChatBot/README.md b/spaces/Kotinagendla/MyGenAIChatBot/README.md deleted file mode 100644 index a2f62b401ec8dde0dc0f2b871c8cfbfbe64179f2..0000000000000000000000000000000000000000 --- a/spaces/Kotinagendla/MyGenAIChatBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAIChatBot -emoji: 🐢 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmdet/utils/split_batch.py b/spaces/KyanChen/RSPrompter/mmdet/utils/split_batch.py deleted file mode 100644 index 0276fb331f23c1a7f7451faf2a8f768e616d45fd..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/utils/split_batch.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def split_batch(img, img_metas, kwargs): - """Split data_batch by tags. - - Code is modified from - # noqa: E501 - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys, see - :class:`mmdet.datasets.pipelines.Collect`. - kwargs (dict): Specific to concrete implementation. - - Returns: - data_groups (dict): a dict that data_batch splited by tags, - such as 'sup', 'unsup_teacher', and 'unsup_student'. 
- """ - - # only stack img in the batch - def fuse_list(obj_list, obj): - return torch.stack(obj_list) if isinstance(obj, - torch.Tensor) else obj_list - - # select data with tag from data_batch - def select_group(data_batch, current_tag): - group_flag = [tag == current_tag for tag in data_batch['tag']] - return { - k: fuse_list([vv for vv, gf in zip(v, group_flag) if gf], v) - for k, v in data_batch.items() - } - - kwargs.update({'img': img, 'img_metas': img_metas}) - kwargs.update({'tag': [meta['tag'] for meta in img_metas]}) - tags = list(set(kwargs['tag'])) - data_groups = {tag: select_group(kwargs, tag) for tag in tags} - for tag, group in data_groups.items(): - group.pop('tag') - return data_groups diff --git a/spaces/Lamai/LAMAIGPT/autogpt/memory/redismem.py b/spaces/Lamai/LAMAIGPT/autogpt/memory/redismem.py deleted file mode 100644 index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/memory/redismem.py +++ /dev/null @@ -1,156 +0,0 @@ -"""Redis memory provider.""" -from __future__ import annotations - -from typing import Any - -import numpy as np -import redis -from colorama import Fore, Style -from redis.commands.search.field import TextField, VectorField -from redis.commands.search.indexDefinition import IndexDefinition, IndexType -from redis.commands.search.query import Query - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.logs import logger -from autogpt.memory.base import MemoryProviderSingleton - -SCHEMA = [ - TextField("data"), - VectorField( - "embedding", - "HNSW", - {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}, - ), -] - - -class RedisMemory(MemoryProviderSingleton): - def __init__(self, cfg): - """ - Initializes the Redis memory provider. - - Args: - cfg: The config object. - - Returns: None - """ - redis_host = cfg.redis_host - redis_port = cfg.redis_port - redis_password = cfg.redis_password - self.dimension = 1536 - self.redis = redis.Redis( - host=redis_host, - port=redis_port, - password=redis_password, - db=0, # Cannot be changed - ) - self.cfg = cfg - - # Check redis connection - try: - self.redis.ping() - except redis.ConnectionError as e: - logger.typewriter_log( - "FAILED TO CONNECT TO REDIS", - Fore.RED, - Style.BRIGHT + str(e) + Style.RESET_ALL, - ) - logger.double_check( - "Please ensure you have setup and configured Redis properly for use. " - + f"You can check out {Fore.CYAN + Style.BRIGHT}" - f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}" - " to ensure you've set up everything correctly." - ) - exit(1) - - if cfg.wipe_redis_on_start: - self.redis.flushall() - try: - self.redis.ft(f"{cfg.memory_index}").create_index( - fields=SCHEMA, - definition=IndexDefinition( - prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH - ), - ) - except Exception as e: - print("Error creating Redis search index: ", e) - existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num") - self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0 - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. - - Args: - data: The data to add. - - Returns: Message indicating that the data has been added. 
- """ - if "Command Error:" in data: - return "" - vector = create_embedding_with_ada(data) - vector = np.array(vector).astype(np.float32).tobytes() - data_dict = {b"data": data, "embedding": vector} - pipe = self.redis.pipeline() - pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict) - _text = ( - f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}" - ) - self.vec_num += 1 - pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num) - pipe.execute() - return _text - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.redis.flushall() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: A list of the most relevant data. - """ - query_embedding = create_embedding_with_ada(data) - base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]" - query = ( - Query(base_query) - .return_fields("data", "vector_score") - .sort_by("vector_score") - .dialect(2) - ) - query_vector = np.array(query_embedding).astype(np.float32).tobytes() - - try: - results = self.redis.ft(f"{self.cfg.memory_index}").search( - query, query_params={"vector": query_vector} - ) - except Exception as e: - print("Error calling Redis search: ", e) - return None - return [result.data for result in results.docs] - - def get_stats(self): - """ - Returns: The stats of the memory index. 
- """ - return self.redis.ft(f"{self.cfg.memory_index}").info() diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/i18n/extract_locale.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/i18n/extract_locale.py deleted file mode 100644 index a4ff5ea3ddd7c612c640544099ab98a861b8fe35..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/assets/i18n/extract_locale.py +++ /dev/null @@ -1,34 +0,0 @@ -import json -import re - -# Define regular expression patterns -pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)""" - -# Initialize the dictionary to store key-value pairs -data = {} - - -def process(fn: str): - global data - with open(fn, "r", encoding="utf-8") as f: - contents = f.read() - matches = re.findall(pattern, contents) - for key in matches: - key = eval(key) - print("extract:", key) - data[key] = key - - -print("processing infer-web.py") -process("infer-web.py") - -print("processing gui_v0.py") -process("gui_v0.py") - -print("processing gui_v1.py") -process("gui_v1.py") - -# Save as a JSON file -with open("./i18n/en_US.json", "w", encoding="utf-8") as f: - json.dump(data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/Lbin123/Lbingo/src/pages/api/create.ts b/spaces/Lbin123/Lbingo/src/pages/api/create.ts deleted file mode 100644 index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/pages/api/create.ts +++ /dev/null @@ -1,31 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/Lianjd/stock_dashboard/backtrader/dataseries.py b/spaces/Lianjd/stock_dashboard/backtrader/dataseries.py deleted file mode 100644 index 25841bb10eb4d245d2a6d3a5a3a6d90e99a923dc..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/dataseries.py +++ /dev/null @@ -1,211 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import datetime as _datetime -from datetime import datetime -import inspect - -from .utils.py3 import range, with_metaclass -from .lineseries import LineSeries -from .utils import AutoOrderedDict, OrderedDict, date2num - - -class TimeFrame(object): - (Ticks, MicroSeconds, Seconds, Minutes, - Days, Weeks, Months, Years, NoTimeFrame) = range(1, 10) - - Names = ['', 'Ticks', 'MicroSeconds', 'Seconds', 'Minutes', - 'Days', 'Weeks', 'Months', 'Years', 'NoTimeFrame'] - - names = Names # support old naming convention - - @classmethod - def getname(cls, tframe, compression=None): - tname = cls.Names[tframe] - if compression > 1 or tname == cls.Names[-1]: - return tname # for plural or 'NoTimeFrame' return plain entry - - # return singular if compression is 1 - return cls.Names[tframe][:-1] - - @classmethod - def TFrame(cls, name): - return getattr(cls, name) - - @classmethod - def TName(cls, tframe): - return cls.Names[tframe] - - -class DataSeries(LineSeries): - plotinfo = dict(plot=True, plotind=True, plotylimited=True) - - _name = '' - _compression = 1 - _timeframe = TimeFrame.Days - - Close, Low, High, Open, Volume, OpenInterest, DateTime = range(7) - - LineOrder = [DateTime, Open, High, Low, Close, Volume, OpenInterest] - - def getwriterheaders(self): - headers = [self._name, 'len'] - - for lo in self.LineOrder: - headers.append(self._getlinealias(lo)) - - morelines = self.getlinealiases()[len(self.LineOrder):] - headers.extend(morelines) - - return headers - - def getwritervalues(self): - l = len(self) - values = [self._name, l] - - if l: - values.append(self.datetime.datetime(0)) - for line in self.LineOrder[1:]: - values.append(self.lines[line][0]) - for i in range(len(self.LineOrder), self.lines.size()): - values.append(self.lines[i][0]) - else: - values.extend([''] * self.lines.size()) # no values yet - - return values - - def getwriterinfo(self): - # returns dictionary with information - info = OrderedDict() - info['Name'] = self._name - info['Timeframe'] = TimeFrame.TName(self._timeframe) - info['Compression'] = self._compression - - return info - - -class OHLC(DataSeries): - lines = ('close', 'low', 'high', 'open', 'volume', 'openinterest',) - - -class OHLCDateTime(OHLC): - lines = (('datetime'),) - - -class SimpleFilterWrapper(object): - '''Wrapper for filters added via .addfilter to turn them - into processors. 
- - Filters are callables which - - - Take a ``data`` as an argument - - Return False if the current bar has not triggered the filter - - Return True if the current bar must be filtered - - The wrapper takes the return value and executes the bar removal - if needed be - ''' - def __init__(self, data, ffilter, *args, **kwargs): - if inspect.isclass(ffilter): - ffilter = ffilter(data, *args, **kwargs) - args = [] - kwargs = {} - - self.ffilter = ffilter - self.args = args - self.kwargs = kwargs - - def __call__(self, data): - if self.ffilter(data, *self.args, **self.kwargs): - data.backwards() - return True - - return False - - -class _Bar(AutoOrderedDict): - ''' - This class is a placeholder for the values of the standard lines of a - DataBase class (from OHLCDateTime) - - It inherits from AutoOrderedDict to be able to easily return the values as - an iterable and address the keys as attributes - - Order of definition is important and must match that of the lines - definition in DataBase (which directly inherits from OHLCDateTime) - ''' - replaying = False - - # Without - 1 ... converting back to time will not work - # Need another -1 to support timezones which may move the time forward - MAXDATE = date2num(_datetime.datetime.max) - 2 - - def __init__(self, maxdate=False): - super(_Bar, self).__init__() - self.bstart(maxdate=maxdate) - - def bstart(self, maxdate=False): - '''Initializes a bar to the default not-updated vaues''' - # Order is important: defined in DataSeries/OHLC/OHLCDateTime - self.close = float('NaN') - self.low = float('inf') - self.high = float('-inf') - self.open = float('NaN') - self.volume = 0.0 - self.openinterest = 0.0 - self.datetime = self.MAXDATE if maxdate else None - - def isopen(self): - '''Returns if a bar has already been updated - - Uses the fact that NaN is the value which is not equal to itself - and ``open`` is initialized to NaN - ''' - o = self.open - return o == o # False if NaN, True in other cases - - def bupdate(self, data, reopen=False): - '''Updates a bar with the values from data - - Returns True if the update was the 1st on a bar (just opened) - - Returns False otherwise - ''' - if reopen: - self.bstart() - - self.datetime = data.datetime[0] - - self.high = max(self.high, data.high[0]) - self.low = min(self.low, data.low[0]) - self.close = data.close[0] - - self.volume += data.volume[0] - self.openinterest = data.openinterest[0] - - o = self.open - if reopen or not o == o: - self.open = data.open[0] - return True # just opened the bar - - return False diff --git a/spaces/LucasCodeBreak/MusicGen/Makefile b/spaces/LucasCodeBreak/MusicGen/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/LucasCodeBreak/MusicGen/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git a/spaces/LuxOAI/ChatGpt-Web/app/api/wanjuan/route.ts b/spaces/LuxOAI/ChatGpt-Web/app/api/wanjuan/route.ts deleted file mode 100644 index f8b3bfb05bdef46e0fd644a264befc8531c6d89c..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/api/wanjuan/route.ts +++ /dev/null @@ -1,35 +0,0 @@ -export async function POST(req: Request) { - 
try { - let token = process.env.WANJUAN_TOKEN; - let body = { message: await req.json() }; - - console.log(JSON.stringify(body)); - let res = ""; - await fetch("http://47.94.237.159:8080/v1/wanjuan", { - method: "POST", - headers: { - Authorization: "Bearer " + token, - }, - body: JSON.stringify(body), - }) - .then((response) => response.json()) - .then((data) => { - // console.log(data) - if (data["statusInfo"]["code"] == 0) { - // console.log("123123") - res = data["data"]["msgContent"]; - } else { - res = data["statusInfo"]["message"]; - } - }) - .catch((err) => { - console.error("[WanJuan] ", err); - res = "出错了请重试!"; - }); - // console.log("12312"+res); - return new Response(res); - } catch (e) { - console.error("[WanJuan] ", e); - return new Response(JSON.stringify(e)); - } -} diff --git a/spaces/ML701G7/taim-gan/src/features/__init__.py b/spaces/ML701G7/taim-gan/src/features/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MadhuV28/Image_Background_Sidebar_Lottie_Animation/README.md b/spaces/MadhuV28/Image_Background_Sidebar_Lottie_Animation/README.md deleted file mode 100644 index ce88c2f148b98caffa218758b355c77f2205fbc6..0000000000000000000000000000000000000000 --- a/spaces/MadhuV28/Image_Background_Sidebar_Lottie_Animation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Background Sidebar Lottie Animation -emoji: 📈 -colorFrom: purple -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/default_runtime.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/default_runtime.py deleted file mode 100644 index b564cc4e7e7d9a67dacaaddecb100e4d8f5c005b..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/default_runtime.py +++ /dev/null @@ -1,14 +0,0 @@ -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook', by_epoch=False), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] -cudnn_benchmark = True diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/plugin.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/plugin.py deleted file mode 100644 index 07c010d4053174dd41107aa654ea67e82b46a25c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re -else: - import re - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. - - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. 
- - Modified from `inflection lib - `_. - - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - type (str): identify plugin layer type. - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. - - Returns: - tuple[str, nn.Module]: - name (str): abbreviation + postfix - layer (nn.Module): created plugin layer - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/spaces/Metal079/wd-v1-4-tags/README.md b/spaces/Metal079/wd-v1-4-tags/README.md deleted file mode 100644 index a553789ff8ab1de5a879c1bd9e6b4ef1b32e2a98..0000000000000000000000000000000000000000 --- a/spaces/Metal079/wd-v1-4-tags/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: WaifuDiffusion v1.4 Tags -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -duplicated_from: SmilingWolf/wd-v1-4-tags ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
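A minimal usage sketch for the `infer_abbr` / `build_plugin_layer` helpers from the mmcv `plugin.py` file above (spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/plugin.py). The `FancyBlock` class, its `_abbr_` value, and the exact import paths are illustrative assumptions, not part of that file; the pattern it shows is: register a layer class in `PLUGIN_LAYERS`, then build it from a config dict whose `type` key names it.

# Hypothetical sketch, assuming the package root exposes the mmcv bricks modules shown above.
import torch.nn as nn
from annotator.uniformer.mmcv.cnn.bricks.registry import PLUGIN_LAYERS
from annotator.uniformer.mmcv.cnn.bricks.plugin import build_plugin_layer

@PLUGIN_LAYERS.register_module()
class FancyBlock(nn.Module):
    _abbr_ = 'fancy_block'  # rule 1 of infer_abbr(); without it, 'FancyBlock' still maps to 'fancy_block'

    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, 1)

    def forward(self, x):
        return self.conv(x)

# 'type' selects the registered class; the remaining keys become constructor kwargs.
name, layer = build_plugin_layer(dict(type='FancyBlock', in_channels=64), postfix=1)
# name == 'fancy_block1'; layer is a FancyBlock(64) instance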
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/options.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/options.py deleted file mode 100644 index 3c76097b6a0b2a312c161bd634a4a137b5929bf3..0000000000000000000000000000000000000000 --- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/options.py +++ /dev/null @@ -1,161 +0,0 @@ -import argparse -import os - - -class BaseOptions(): - def __init__(self): - self.initialized = False - argparse - def initialize(self, parser): - # Datasets related - g_data = parser.add_argument_group('Data') - g_data.add_argument('--dataroot', type=str, default='./data', - help='path to images (data folder)') - - g_data.add_argument('--loadSize', type=int, default=512, help='load size of input image') - - # Experiment related - g_exp = parser.add_argument_group('Experiment') - g_exp.add_argument('--name', type=str, default='example', - help='name of the experiment. It decides where to store samples and models') - g_exp.add_argument('--debug', action='store_true', help='debug mode or not') - - g_exp.add_argument('--num_views', type=int, default=1, help='How many views to use for multiview network.') - g_exp.add_argument('--random_multiview', action='store_true', help='Select random multiview combination.') - - # Training related - g_train = parser.add_argument_group('Training') - g_train.add_argument('--gpu_id', type=int, default=0, help='gpu id for cuda') - g_train.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2, -1 for CPU mode') - - g_train.add_argument('--num_threads', default=1, type=int, help='# sthreads for loading data') - g_train.add_argument('--serial_batches', action='store_true', - help='if true, takes images in order to make batches, otherwise takes them randomly') - g_train.add_argument('--pin_memory', action='store_true', help='pin_memory') - - g_train.add_argument('--batch_size', type=int, default=2, help='input batch size') - g_train.add_argument('--learning_rate', type=float, default=1e-3, help='adam learning rate') - g_train.add_argument('--learning_rateC', type=float, default=1e-3, help='adam learning rate') - g_train.add_argument('--num_epoch', type=int, default=100, help='num epoch to train') - - g_train.add_argument('--freq_plot', type=int, default=10, help='freqency of the error plot') - g_train.add_argument('--freq_save', type=int, default=50, help='freqency of the save_checkpoints') - g_train.add_argument('--freq_save_ply', type=int, default=100, help='freqency of the save ply') - - g_train.add_argument('--no_gen_mesh', action='store_true') - g_train.add_argument('--no_num_eval', action='store_true') - - g_train.add_argument('--resume_epoch', type=int, default=-1, help='epoch resuming the training') - g_train.add_argument('--continue_train', action='store_true', help='continue training: load the latest model') - - # Testing related - g_test = parser.add_argument_group('Testing') - g_test.add_argument('--resolution', type=int, default=256, help='# of grid in mesh reconstruction') - g_test.add_argument('--test_folder_path', type=str, default=None, help='the folder of test image') - - # Sampling related - g_sample = parser.add_argument_group('Sampling') - g_sample.add_argument('--sigma', type=float, default=5.0, help='perturbation standard deviation for positions') - - g_sample.add_argument('--num_sample_inout', type=int, default=5000, help='# of sampling points') - g_sample.add_argument('--num_sample_color', type=int, default=0, help='# of sampling points') - - 
g_sample.add_argument('--z_size', type=float, default=200.0, help='z normalization factor') - - # Model related - g_model = parser.add_argument_group('Model') - # General - g_model.add_argument('--norm', type=str, default='group', - help='instance normalization or batch normalization or group normalization') - g_model.add_argument('--norm_color', type=str, default='instance', - help='instance normalization or batch normalization or group normalization') - - # hg filter specify - g_model.add_argument('--num_stack', type=int, default=4, help='# of hourglass') - g_model.add_argument('--num_hourglass', type=int, default=2, help='# of stacked layer of hourglass') - g_model.add_argument('--skip_hourglass', action='store_true', help='skip connection in hourglass') - g_model.add_argument('--hg_down', type=str, default='ave_pool', help='ave pool || conv64 || conv128') - g_model.add_argument('--hourglass_dim', type=int, default='256', help='256 | 512') - - # Classification General - g_model.add_argument('--mlp_dim', nargs='+', default=[257, 1024, 512, 256, 128, 1], type=int, - help='# of dimensions of mlp') - g_model.add_argument('--mlp_dim_color', nargs='+', default=[513, 1024, 512, 256, 128, 3], - type=int, help='# of dimensions of color mlp') - - g_model.add_argument('--use_tanh', action='store_true', - help='using tanh after last conv of image_filter network') - - # for train - parser.add_argument('--random_flip', action='store_true', help='if random flip') - parser.add_argument('--random_trans', action='store_true', help='if random flip') - parser.add_argument('--random_scale', action='store_true', help='if random flip') - parser.add_argument('--no_residual', action='store_true', help='no skip connection in mlp') - parser.add_argument('--schedule', type=int, nargs='+', default=[60, 80], - help='Decrease learning rate at these epochs.') - parser.add_argument('--gamma', type=float, default=0.1, help='LR is multiplied by gamma on schedule.') - parser.add_argument('--color_loss_type', type=str, default='l1', help='mse | l1') - - # for eval - parser.add_argument('--val_test_error', action='store_true', help='validate errors of test data') - parser.add_argument('--val_train_error', action='store_true', help='validate errors of train data') - parser.add_argument('--gen_test_mesh', action='store_true', help='generate test mesh') - parser.add_argument('--gen_train_mesh', action='store_true', help='generate train mesh') - parser.add_argument('--all_mesh', action='store_true', help='generate meshs from all hourglass output') - parser.add_argument('--num_gen_mesh_test', type=int, default=1, - help='how many meshes to generate during testing') - - # path - parser.add_argument('--checkpoints_path', type=str, default='./checkpoints', help='path to save checkpoints') - parser.add_argument('--load_netG_checkpoint_path', type=str, default=None, help='path to save checkpoints') - parser.add_argument('--load_netC_checkpoint_path', type=str, default=None, help='path to save checkpoints') - parser.add_argument('--results_path', type=str, default='./results', help='path to save results ply') - parser.add_argument('--load_checkpoint_path', type=str, help='path to save results ply') - parser.add_argument('--single', type=str, default='', help='single data for training') - # for single image reconstruction - parser.add_argument('--mask_path', type=str, help='path for input mask') - parser.add_argument('--img_path', type=str, help='path for input image') - - # aug - group_aug = parser.add_argument_group('aug') - 
group_aug.add_argument('--aug_alstd', type=float, default=0.0, help='augmentation pca lighting alpha std') - group_aug.add_argument('--aug_bri', type=float, default=0.0, help='augmentation brightness') - group_aug.add_argument('--aug_con', type=float, default=0.0, help='augmentation contrast') - group_aug.add_argument('--aug_sat', type=float, default=0.0, help='augmentation saturation') - group_aug.add_argument('--aug_hue', type=float, default=0.0, help='augmentation hue') - group_aug.add_argument('--aug_blur', type=float, default=0.0, help='augmentation blur') - - # special tasks - self.initialized = True - return parser - - def gather_options(self): - # initialize parser with basic options - if not self.initialized: - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - self.parser = parser - - return parser.parse_args() - - def print_options(self, opt): - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - def parse(self): - opt = self.gather_options() - return opt - - def parse_to_dict(self): - opt = self.gather_options() - return opt.__dict__ \ No newline at end of file diff --git a/spaces/Multi-chan/amy_project/README.md b/spaces/Multi-chan/amy_project/README.md deleted file mode 100644 index 693b8b7c44d4a4a15fdf1fe850fbb248a5b970f3..0000000000000000000000000000000000000000 --- a/spaces/Multi-chan/amy_project/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Amy Project -emoji: ⚡ -colorFrom: purple -colorTo: pink -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/__init__.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/tools/eval.py b/spaces/NAACL2022/CLIP-Caption-Reward/tools/eval.py deleted file mode 100644 index 881580737fa554344b1b66ab79c4f1de114759ca..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/tools/eval.py +++ /dev/null @@ -1,125 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import json -import numpy as np - -import time -import os -from six.moves import cPickle - -import captioning.utils.opts as opts -import captioning.models as models -from captioning.data.dataloader import * -# from captioning.data.dataloaderraw import * -import captioning.utils.eval_utils as eval_utils -import argparse -import captioning.utils.misc as utils -import captioning.modules.losses as losses -import torch - -# Input arguments and options -parser = argparse.ArgumentParser() -# Input paths -parser.add_argument('--model', type=str, default='', - help='path to model to evaluate') -parser.add_argument('--cnn_model', type=str, default='resnet101', - help='resnet101, resnet152') -parser.add_argument('--infos_path', type=str, default='', - help='path to infos to evaluate') -parser.add_argument('--only_lang_eval', type=int, default=0, - 
help='lang eval on saved results') -parser.add_argument('--force', type=int, default=0, - help='force to evaluate no matter if there are results available') -parser.add_argument('--device', type=str, default='cuda', - help='cpu or cuda') -opts.add_eval_options(parser) -opts.add_diversity_opts(parser) -opt = parser.parse_args() - -# Load infos -with open(opt.infos_path, 'rb') as f: - infos = utils.pickle_load(f) - -# override and collect parameters -replace = ['input_fc_dir', 'input_att_dir', 'input_box_dir', 'input_label_h5', 'input_json', 'batch_size', 'id'] -ignore = ['start_from'] - -for k in vars(infos['opt']).keys(): - if k in replace: - setattr(opt, k, getattr(opt, k) or getattr(infos['opt'], k, '')) - elif k not in ignore: - if not k in vars(opt): - vars(opt).update({k: vars(infos['opt'])[k]}) # copy over options from model - -vocab = infos['vocab'] # ix -> word mapping - -pred_fn = os.path.join('eval_results/', '.saved_pred_'+ opt.id + '_' + opt.split + '.pth') -result_fn = os.path.join('eval_results/', opt.id + '_' + opt.split + '.json') - -if opt.only_lang_eval == 1 or (not opt.force and os.path.isfile(pred_fn)): - # if results existed, then skip, unless force is on - if not opt.force: - try: - if os.path.isfile(result_fn): - print(result_fn) - json.load(open(result_fn, 'r')) - print('already evaluated') - os._exit(0) - except: - pass - - predictions, n_predictions = torch.load(pred_fn) - lang_stats = eval_utils.language_eval(opt.input_json, predictions, n_predictions, vars(opt), opt.split) - print(lang_stats) - os._exit(0) - -# At this point only_lang_eval if 0 -if not opt.force: - # Check out if - try: - # if no pred exists, then continue - tmp = torch.load(pred_fn) - # if language_eval == 1, and no pred exists, then continue - if opt.language_eval == 1: - json.load(open(result_fn, 'r')) - print('Result is already there') - os._exit(0) - except: - pass - -# Setup the model -opt.vocab = vocab -model = models.setup(opt) -del opt.vocab -model.load_state_dict(torch.load(opt.model, map_location='cpu')) -model.to(opt.device) -model.eval() -crit = losses.LanguageModelCriterion() - -# Create the Data Loader instance -if len(opt.image_folder) == 0: - loader = DataLoader(opt) -else: - loader = DataLoaderRaw({'folder_path': opt.image_folder, - 'coco_json': opt.coco_json, - 'batch_size': opt.batch_size, - 'cnn_model': opt.cnn_model}) -# When eval using provided pretrained model, the vocab may be different from what you have in your cocotalk.json -# So make sure to use the vocab in infos file. -loader.dataset.ix_to_word = infos['vocab'] - - -# Set sample options -opt.dataset = opt.input_json -loss, split_predictions, lang_stats = eval_utils.eval_split(model, crit, loader, - vars(opt)) - -print('loss: ', loss) -if lang_stats: - print(lang_stats) - -if opt.dump_json == 1: - # dump the json - json.dump(split_predictions, open('vis/vis.json', 'w')) diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/ops/roi_ops.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/ops/roi_ops.py deleted file mode 100644 index a21bc7b2882de39b12bc76dacd37047fabac1766..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/ops/roi_ops.py +++ /dev/null @@ -1,237 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""ROI-related ops.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from official.vision.detection.ops import nms -from official.vision.detection.utils import box_utils - - -def multilevel_propose_rois(rpn_boxes, - rpn_scores, - anchor_boxes, - image_shape, - rpn_pre_nms_top_k=2000, - rpn_post_nms_top_k=1000, - rpn_nms_threshold=0.7, - rpn_score_threshold=0.0, - rpn_min_size_threshold=0.0, - decode_boxes=True, - clip_boxes=True, - use_batched_nms=False, - apply_sigmoid_to_score=True): - """Proposes RoIs given a group of candidates from different FPN levels. - - The following describes the steps: - 1. For each individual level: - a. Apply sigmoid transform if specified. - b. Decode boxes if specified. - c. Clip boxes if specified. - d. Filter small boxes and those fall outside image if specified. - e. Apply pre-NMS filtering including pre-NMS top k and score thresholding. - f. Apply NMS. - 2. Aggregate post-NMS boxes from each level. - 3. Apply an overall top k to generate the final selected RoIs. - - Args: - rpn_boxes: a dict with keys representing FPN levels and values representing - box tenors of shape [batch_size, feature_h, feature_w, num_anchors * 4]. - rpn_scores: a dict with keys representing FPN levels and values representing - logit tensors of shape [batch_size, feature_h, feature_w, num_anchors]. - anchor_boxes: a dict with keys representing FPN levels and values - representing anchor box tensors of shape - [batch_size, feature_h, feature_w, num_anchors * 4]. - image_shape: a tensor of shape [batch_size, 2] where the last dimension are - [height, width] of the scaled image. - rpn_pre_nms_top_k: an integer of top scoring RPN proposals *per level* to - keep before applying NMS. Default: 2000. - rpn_post_nms_top_k: an integer of top scoring RPN proposals *in total* to - keep after applying NMS. Default: 1000. - rpn_nms_threshold: a float between 0 and 1 representing the IoU threshold - used for NMS. If 0.0, no NMS is applied. Default: 0.7. - rpn_score_threshold: a float between 0 and 1 representing the minimal box - score to keep before applying NMS. This is often used as a pre-filtering - step for better performance. If 0, no filtering is applied. Default: 0. - rpn_min_size_threshold: a float representing the minimal box size in each - side (w.r.t. the scaled image) to keep before applying NMS. This is often - used as a pre-filtering step for better performance. If 0, no filtering is - applied. Default: 0. - decode_boxes: a boolean indicating whether `rpn_boxes` needs to be decoded - using `anchor_boxes`. If False, use `rpn_boxes` directly and ignore - `anchor_boxes`. Default: True. - clip_boxes: a boolean indicating whether boxes are first clipped to the - scaled image size before appliying NMS. If False, no clipping is applied - and `image_shape` is ignored. Default: True. - use_batched_nms: a boolean indicating whether NMS is applied in batch using - `tf.image.combined_non_max_suppression`. 
Currently only available in - CPU/GPU. Default: False. - apply_sigmoid_to_score: a boolean indicating whether apply sigmoid to - `rpn_scores` before applying NMS. Default: True. - - Returns: - selected_rois: a tensor of shape [batch_size, rpn_post_nms_top_k, 4], - representing the box coordinates of the selected proposals w.r.t. the - scaled image. - selected_roi_scores: a tensor of shape [batch_size, rpn_post_nms_top_k, 1], - representing the scores of the selected proposals. - """ - with tf.name_scope('multilevel_propose_rois'): - rois = [] - roi_scores = [] - image_shape = tf.expand_dims(image_shape, axis=1) - for level in sorted(rpn_scores.keys()): - with tf.name_scope('level_%d' % level): - _, feature_h, feature_w, num_anchors_per_location = ( - rpn_scores[level].get_shape().as_list()) - - num_boxes = feature_h * feature_w * num_anchors_per_location - this_level_scores = tf.reshape(rpn_scores[level], [-1, num_boxes]) - this_level_boxes = tf.reshape(rpn_boxes[level], [-1, num_boxes, 4]) - this_level_anchors = tf.cast( - tf.reshape(anchor_boxes[level], [-1, num_boxes, 4]), - dtype=this_level_scores.dtype) - - if apply_sigmoid_to_score: - this_level_scores = tf.sigmoid(this_level_scores) - - if decode_boxes: - this_level_boxes = box_utils.decode_boxes( - this_level_boxes, this_level_anchors) - if clip_boxes: - this_level_boxes = box_utils.clip_boxes( - this_level_boxes, image_shape) - - if rpn_min_size_threshold > 0.0: - this_level_boxes, this_level_scores = box_utils.filter_boxes( - this_level_boxes, - this_level_scores, - image_shape, - rpn_min_size_threshold) - - this_level_pre_nms_top_k = min(num_boxes, rpn_pre_nms_top_k) - this_level_post_nms_top_k = min(num_boxes, rpn_post_nms_top_k) - if rpn_nms_threshold > 0.0: - if use_batched_nms: - this_level_rois, this_level_roi_scores, _, _ = ( - tf.image.combined_non_max_suppression( - tf.expand_dims(this_level_boxes, axis=2), - tf.expand_dims(this_level_scores, axis=-1), - max_output_size_per_class=this_level_pre_nms_top_k, - max_total_size=this_level_post_nms_top_k, - iou_threshold=rpn_nms_threshold, - score_threshold=rpn_score_threshold, - pad_per_class=False, - clip_boxes=False)) - else: - if rpn_score_threshold > 0.0: - this_level_boxes, this_level_scores = ( - box_utils.filter_boxes_by_scores( - this_level_boxes, this_level_scores, rpn_score_threshold)) - this_level_boxes, this_level_scores = box_utils.top_k_boxes( - this_level_boxes, this_level_scores, k=this_level_pre_nms_top_k) - this_level_roi_scores, this_level_rois = ( - nms.sorted_non_max_suppression_padded( - this_level_scores, - this_level_boxes, - max_output_size=this_level_post_nms_top_k, - iou_threshold=rpn_nms_threshold)) - else: - this_level_rois, this_level_roi_scores = box_utils.top_k_boxes( - this_level_rois, - this_level_scores, - k=this_level_post_nms_top_k) - - rois.append(this_level_rois) - roi_scores.append(this_level_roi_scores) - - all_rois = tf.concat(rois, axis=1) - all_roi_scores = tf.concat(roi_scores, axis=1) - - with tf.name_scope('top_k_rois'): - _, num_valid_rois = all_roi_scores.get_shape().as_list() - overall_top_k = min(num_valid_rois, rpn_post_nms_top_k) - - selected_rois, selected_roi_scores = box_utils.top_k_boxes( - all_rois, all_roi_scores, k=overall_top_k) - - return selected_rois, selected_roi_scores - - -class ROIGenerator(object): - """Proposes RoIs for the second stage processing.""" - - def __init__(self, params): - self._rpn_pre_nms_top_k = params.rpn_pre_nms_top_k - self._rpn_post_nms_top_k = params.rpn_post_nms_top_k - 
self._rpn_nms_threshold = params.rpn_nms_threshold - self._rpn_score_threshold = params.rpn_score_threshold - self._rpn_min_size_threshold = params.rpn_min_size_threshold - self._test_rpn_pre_nms_top_k = params.test_rpn_pre_nms_top_k - self._test_rpn_post_nms_top_k = params.test_rpn_post_nms_top_k - self._test_rpn_nms_threshold = params.test_rpn_nms_threshold - self._test_rpn_score_threshold = params.test_rpn_score_threshold - self._test_rpn_min_size_threshold = params.test_rpn_min_size_threshold - self._use_batched_nms = params.use_batched_nms - - def __call__(self, boxes, scores, anchor_boxes, image_shape, is_training): - """Generates RoI proposals. - - Args: - boxes: a dict with keys representing FPN levels and values representing - box tenors of shape [batch_size, feature_h, feature_w, num_anchors * 4]. - scores: a dict with keys representing FPN levels and values representing - logit tensors of shape [batch_size, feature_h, feature_w, num_anchors]. - anchor_boxes: a dict with keys representing FPN levels and values - representing anchor box tensors of shape - [batch_size, feature_h, feature_w, num_anchors * 4]. - image_shape: a tensor of shape [batch_size, 2] where the last dimension - are [height, width] of the scaled image. - is_training: a bool indicating whether it is in training or inference - mode. - - Returns: - proposed_rois: a tensor of shape [batch_size, rpn_post_nms_top_k, 4], - representing the box coordinates of the proposed RoIs w.r.t. the - scaled image. - proposed_roi_scores: a tensor of shape - [batch_size, rpn_post_nms_top_k, 1], representing the scores of the - proposed RoIs. - - """ - proposed_rois, proposed_roi_scores = multilevel_propose_rois( - boxes, - scores, - anchor_boxes, - image_shape, - rpn_pre_nms_top_k=(self._rpn_pre_nms_top_k if is_training - else self._test_rpn_pre_nms_top_k), - rpn_post_nms_top_k=(self._rpn_post_nms_top_k if is_training - else self._test_rpn_post_nms_top_k), - rpn_nms_threshold=(self._rpn_nms_threshold if is_training - else self._test_rpn_nms_threshold), - rpn_score_threshold=(self._rpn_score_threshold if is_training - else self._test_rpn_score_threshold), - rpn_min_size_threshold=(self._rpn_min_size_threshold if is_training - else self._test_rpn_min_size_threshold), - decode_boxes=True, - clip_boxes=True, - use_batched_nms=self._use_batched_nms, - apply_sigmoid_to_score=True) - return proposed_rois, proposed_roi_scores diff --git a/spaces/Narrativaai/NLLB-Translator/ui.py b/spaces/Narrativaai/NLLB-Translator/ui.py deleted file mode 100644 index 6e2876bd9f2ee7b3e801d9985e1b83a1d4be347c..0000000000000000000000000000000000000000 --- a/spaces/Narrativaai/NLLB-Translator/ui.py +++ /dev/null @@ -1,13 +0,0 @@ -title = "NLLB TRANSLATION Demo" -description = """ -

-Translator using Facebook's NLLB models.
-Developed by Narrativa.
-[image: meta nllb pic]
    -""" - -examples = [["I love to test latest translation models by META at HuggingFace thanks to Narrativa", - "eng_Latn", "spa_Latn", 400]] diff --git a/spaces/NbAiLab/maken-clip-text/README.md b/spaces/NbAiLab/maken-clip-text/README.md deleted file mode 100644 index b7038c43128a3f9c0f8c799bfa00cf0f00dc21d4..0000000000000000000000000000000000000000 --- a/spaces/NbAiLab/maken-clip-text/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Maken Clip Text -emoji: 📜 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Nee001/bing0/Dockerfile b/spaces/Nee001/bing0/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/NeuralInternet/Text-Generation_Playground/extensions/gallery/script.py b/spaces/NeuralInternet/Text-Generation_Playground/extensions/gallery/script.py deleted file mode 100644 index 8a2d7cf988734a7ab0966d047ff3d31ba58324b7..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Text-Generation_Playground/extensions/gallery/script.py +++ /dev/null @@ -1,82 +0,0 @@ -from pathlib import Path - -import gradio as gr - -from modules.html_generator import get_image_cache - - -def generate_html(): - css = """ - .character-gallery { - margin: 1rem 0; - display: grid; - grid-template-columns: repeat(auto-fit, minmax(150px, 1fr)); - grid-column-gap: 0.4rem; - grid-row-gap: 1.2rem; - } - - .character-container { - cursor: pointer; - text-align: center; - position: relative; - opacity: 0.85; - } - - .character-container:hover { - opacity: 1; - } - - .character-container .placeholder, .character-container img { - width: 150px; - height: 200px; - background-color: gray; - object-fit: cover; - margin: 0 auto; - border-radius: 1rem; - border: 3px solid white; - box-shadow: 3px 3px 6px 0px rgb(0 0 0 / 50%); - } - - .character-name { - margin-top: 0.3rem; - display: block; - font-size: 1.2rem; - font-weight: 600; - overflow-wrap: anywhere; - } - """ - - container_html = f'" - return container_html - -def ui(): - with gr.Accordion("Character gallery"): - update = gr.Button("Refresh") - gallery = gr.HTML(value=generate_html()) - update.click(generate_html, [], gallery) diff --git a/spaces/NoCrypt/pixelization/pixelization.py b/spaces/NoCrypt/pixelization/pixelization.py deleted file mode 100644 index 54f7701e4cc517fd57b4f48de193f17d699330ce..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/pixelization/pixelization.py +++ /dev/null @@ -1,214 +0,0 @@ -import os -import torch -import torchvision.transforms as transforms -from PIL import Image -import numpy as np -from models.networks import define_G -import glob - - -pixelize_code = [ - 233356.8125, -27387.5918, -32866.8008, 126575.0312, -181590.0156, - -31543.1289, 50374.1289, 99631.4062, -188897.3750, 138322.7031, - -107266.2266, 125778.5781, 42416.1836, 139710.8594, -39614.6250, - -69972.6875, -21886.4141, 86938.4766, 31457.6270, -98892.2344, - -1191.5887, -61662.1719, -180121.9062, -32931.0859, 43109.0391, - 21490.1328, -153485.3281, 94259.1797, 43103.1992, -231953.8125, - 52496.7422, 142697.4062, -34882.7852, -98740.0625, 34458.5078, - -135436.3438, 11420.5488, -18895.8984, -71195.4141, 176947.2344, - -52747.5742, 109054.6562, -28124.9473, -17736.6152, -41327.1562, - 69853.3906, 79046.2656, -3923.7344, -5644.5229, 96586.7578, - -89315.2656, -146578.0156, -61862.1484, -83956.4375, 87574.5703, - -75055.0469, 19571.8203, 79358.7891, -16501.5000, -147169.2188, - -97861.6797, 60442.1797, 40156.9023, 223136.3906, -81118.0547, - -221443.6406, 54911.6914, 54735.9258, -58805.7305, -168884.4844, - 40865.9609, -28627.9043, -18604.7227, 120274.6172, 49712.2383, - 164402.7031, -53165.0820, -60664.0469, -97956.1484, -121468.4062, - -69926.1484, -4889.0151, 127367.7344, 200241.0781, -85817.7578, - -143190.0625, -74049.5312, 137980.5781, -150788.7656, -115719.6719, - -189250.1250, -153069.7344, -127429.7891, -187588.2500, 125264.7422, - -79082.3438, -114144.5781, 36033.5039, -57502.2188, 80488.1562, - 36501.4570, -138817.5938, -22189.6523, -222146.9688, -73292.3984, - 127717.2422, -183836.3750, -105907.0859, 145422.8750, 66981.2031, - -9596.6699, 
78099.4922, 70226.3359, 35841.8789, -116117.6016, - -150986.0156, 81622.4922, 113575.0625, 154419.4844, 53586.4141, - 118494.8750, 131625.4375, -19763.1094, 75581.1172, -42750.5039, - 97934.8281, 6706.7949, -101179.0078, 83519.6172, -83054.8359, - -56749.2578, -30683.6992, 54615.9492, 84061.1406, -229136.7188, - -60554.0000, 8120.2622, -106468.7891, -28316.3418, -166351.3125, - 47797.3984, 96013.4141, 71482.9453, -101429.9297, 209063.3594, - -3033.6882, -38952.5352, -84920.6719, -5895.1543, -18641.8105, - 47884.3633, -14620.0273, -132898.6719, -40903.5859, 197217.3750, - -128599.1328, -115397.8906, -22670.7676, -78569.9688, -54559.7070, - -106855.2031, 40703.1484, 55568.3164, 60202.9844, -64757.9375, - -32068.8652, 160663.3438, 72187.0703, -148519.5469, 162952.8906, - -128048.2031, -136153.8906, -15270.3730, -52766.3281, -52517.4531, - 18652.1992, 195354.2188, -136657.3750, -8034.2622, -92699.6016, - -129169.1406, 188479.9844, 46003.7500, -93383.0781, -67831.6484, - -66710.5469, 104338.5234, 85878.8438, -73165.2031, 95857.3203, - 71213.1250, 94603.1094, -30359.8125, -107989.2578, 99822.1719, - 184626.3594, 79238.4531, -272978.9375, -137948.5781, -145245.8125, - 75359.2031, 26652.7930, 50421.4141, 60784.4102, -18286.3398, - -182851.9531, -87178.7969, -13131.7539, 195674.8906, 59951.7852, - 124353.7422, -36709.1758, -54575.4766, 77822.6953, 43697.4102, - -64394.3438, 113281.1797, -93987.0703, 221989.7188, 132902.5000, - -9538.8574, -14594.1338, 65084.9453, -12501.7227, 130330.6875, - -115123.4766, 20823.0898, 75512.4922, -75255.7422, -41936.7656, - -186678.8281, -166799.9375, 138770.6250, -78969.9531, 124516.8047, - -85558.5781, -69272.4375, -115539.1094, 228774.4844, -76529.3281, - -107735.8906, -76798.8906, -194335.2812, 56530.5742, -9397.7529, - 132985.8281, 163929.8438, -188517.7969, -141155.6406, 45071.0391, - 207788.3125, -125826.1172, 8965.3320, -159584.8438, 95842.4609, - -76929.4688 -] - - - -class Model(): - def __init__(self, device="cpu"): - self.device = torch.device(device) - self.G_A_net = None - self.alias_net = None - - def load(self): - with torch.no_grad(): - self.G_A_net = define_G(3, 3, 64, "c2pGen", "instance", False, "normal", 0.02, [0]) - self.alias_net = define_G(3, 3, 64, "antialias", "instance", False, "normal", 0.02, [0]) - - G_A_state = torch.load("160_net_G_A.pth" if not os.environ['NET_MODEL'] else os.environ['NET_MODEL'], map_location=str(self.device)) - for p in list(G_A_state.keys()): - G_A_state["module."+str(p)] = G_A_state.pop(p) - self.G_A_net.load_state_dict(G_A_state) - - alias_state = torch.load("alias_net.pth" if not os.environ['ALIAS_MODEL'] else os.environ['ALIAS_MODEL'], map_location=str(self.device)) - for p in list(alias_state.keys()): - alias_state["module."+str(p)] = alias_state.pop(p) - self.alias_net.load_state_dict(alias_state) - - def pixelize(self, in_img, out_img): - with torch.no_grad(): - in_img = Image.open(in_img).convert('RGB') - in_t = process(in_img).to(self.device) - - out_t = self.alias_net(self.G_A_net(in_t, self.ref_t)) - - save(out_t, out_img) - - def pixelize_modified(self, in_img, pixel_size, upscale_after) -> Image.Image: - with torch.no_grad(): - in_img = in_img.convert('RGB') - - # limit in_img size to 1024x1024 to maintain performance - if in_img.size[0] > 1024 or in_img.size[1] > 1024: - in_img.thumbnail((1024, 1024), Image.NEAREST) - - # Killing inspect element users, I know what you're doing lol. 
- pixel_size = pixel_size if pixel_size >= 4 else 4 - - in_img = in_img.resize((in_img.size[0] * 4 // pixel_size, in_img.size[1] * 4 // pixel_size)) - in_t = process(in_img).to(self.device) - - - # out_t = self.alias_net(self.G_A_net(in_t, self.ref_t)) - feature = self.G_A_net.module.RGBEnc(in_t) - code = torch.asarray(pixelize_code, device=self.device).reshape((1, 256, 1, 1)) - adain_params = self.G_A_net.module.MLP(code) - images = self.G_A_net.module.RGBDec(feature, adain_params) - out_t = self.alias_net(images) - - - img = to_image(out_t, pixel_size, upscale_after) - return img - -def to_image(tensor, pixel_size, upscale_after): - img = tensor.data[0].cpu().float().numpy() - img = (np.transpose(img, (1, 2, 0)) + 1) / 2.0 * 255.0 - img = img.astype(np.uint8) - img = Image.fromarray(img) - img = img.resize((img.size[0]//4, img.size[1]//4), resample=Image.Resampling.NEAREST) - if upscale_after: - img = img.resize((img.size[0]*pixel_size, img.size[1]*pixel_size), resample=Image.Resampling.NEAREST) - - return img - - -def greyscale(img): - gray = np.array(img.convert('L')) - tmp = np.expand_dims(gray, axis=2) - tmp = np.concatenate((tmp, tmp, tmp), axis=-1) - return Image.fromarray(tmp) - -def process(img): - ow,oh = img.size - - nw = int(round(ow / 4) * 4) - nh = int(round(oh / 4) * 4) - - left = (ow - nw)//2 - top = (oh - nh)//2 - right = left + nw - bottom = top + nh - - img = img.crop((left, top, right, bottom)) - - trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) - - return trans(img)[None, :, :, :] - -def save(tensor, file): - img = tensor.data[0].cpu().float().numpy() - img = (np.transpose(img, (1, 2, 0)) + 1) / 2.0 * 255.0 - img = img.astype(np.uint8) - img = Image.fromarray(img) - img = img.resize((img.size[0]//4, img.size[1]//4), resample=Image.Resampling.NEAREST) - img = img.resize((img.size[0]*4, img.size[1]*4), resample=Image.Resampling.NEAREST) - img.save(file) - -def pixelize_cli(): - import argparse - import os - parser = argparse.ArgumentParser(description='Pixelization') - parser.add_argument('--input', type=str, default=None, required=True, help='path to image or directory') - parser.add_argument('--output', type=str, default=None, required=False, help='path to save image/images') - parser.add_argument('--cpu', action='store_true', help='use CPU instead of GPU') - - args = parser.parse_args() - in_path = args.input - out_path = args.output - use_cpu = args.cpu - - if not os.path.exists("alias_net.pth" if not os.environ['ALIAS_MODEL'] else os.environ['ALIAS_MODEL']): - print("missing models") - - pairs = [] - - if os.path.isdir(in_path): - in_images = glob.glob(in_path + "/*.png") + glob.glob(in_path + "/*.jpg") - if not out_path: - out_path = os.path.join(in_path, "outputs") - if not os.path.exists(out_path): - os.makedirs(out_path) - elif os.path.isfile(out_path): - print("output cant be a file if input is a directory") - return - for i in in_images: - pairs += [(i, i.replace(in_path, out_path))] - elif os.path.isfile(in_path): - if not out_path: - base, ext = os.path.splitext(in_path) - out_path = base+"_pixelized"+ext - else: - if os.path.isdir(out_path): - _, file = os.path.split(in_path) - out_path = os.path.join(out_path, file) - pairs = [(in_path, out_path)] - - m = Model(device = "cpu" if use_cpu else "cuda") - m.load() - - for in_file, out_file in pairs: - print("PROCESSING", in_file, "TO", out_file) - m.pixelize(in_file, out_file) - -if __name__ == "__main__": - pixelize_cli() \ No newline at end of file 
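For reference, a short sketch of driving the pixelization `Model` defined above outside the CLI, mirroring what `pixelize_cli()` and the Gradio app do. The input/output file names are placeholders, and `Model.load()` reads the NET_MODEL and ALIAS_MODEL environment variables directly, so those variables and the corresponding .pth checkpoints are assumed to be in place.

# Sketch only: assumes it runs next to pixelization.py with checkpoints and env vars available.
from PIL import Image
from pixelization import Model

m = Model(device="cpu")   # or "cuda" if a GPU is available
m.load()                  # loads the generator (160_net_G_A.pth) and the anti-aliasing network

# File-to-file path, as used by pixelize_cli():
m.pixelize("input.png", "output.png")

# In-memory path, as used by the Gradio demo (returns a PIL Image):
img = Image.open("input.png")
out = m.pixelize_modified(img, pixel_size=4, upscale_after=True)
out.save("output_pixelized.png")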
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/generate_waveform.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/generate_waveform.py deleted file mode 100644 index bfc2ef8eb3d91366caf7609d75aa1795ab0ed8f9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/generate_waveform.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import matplotlib.pyplot as plt -import numpy as np -from pathlib import Path -import soundfile as sf -import sys -import torch -import torchaudio - -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.logging import progress_bar -from fairseq.tasks.text_to_speech import plot_tts_output -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDataset - - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def make_parser(): - parser = options.get_speech_generation_parser() - parser.add_argument("--dump-features", action="store_true") - parser.add_argument("--dump-waveforms", action="store_true") - parser.add_argument("--dump-attentions", action="store_true") - parser.add_argument("--dump-eos-probs", action="store_true") - parser.add_argument("--dump-plots", action="store_true") - parser.add_argument("--dump-target", action="store_true") - parser.add_argument("--output-sample-rate", default=22050, type=int) - parser.add_argument("--teacher-forcing", action="store_true") - parser.add_argument( - "--audio-format", type=str, default="wav", choices=["wav", "flac"] - ) - return parser - - -def postprocess_results( - dataset: TextToSpeechDataset, sample, hypos, resample_fn, dump_target -): - def to_np(x): - return None if x is None else x.detach().cpu().numpy() - - sample_ids = [dataset.ids[i] for i in sample["id"].tolist()] - texts = sample["src_texts"] - attns = [to_np(hypo["attn"]) for hypo in hypos] - eos_probs = [to_np(hypo.get("eos_prob", None)) for hypo in hypos] - feat_preds = [to_np(hypo["feature"]) for hypo in hypos] - wave_preds = [to_np(resample_fn(h["waveform"])) for h in hypos] - if dump_target: - feat_targs = [to_np(hypo["targ_feature"]) for hypo in hypos] - wave_targs = [to_np(resample_fn(h["targ_waveform"])) for h in hypos] - else: - feat_targs = [None for _ in hypos] - wave_targs = [None for _ in hypos] - - return zip(sample_ids, texts, attns, eos_probs, feat_preds, wave_preds, - feat_targs, wave_targs) - - -def dump_result( - is_na_model, - args, - vocoder, - sample_id, - text, - attn, - eos_prob, - feat_pred, - wave_pred, - feat_targ, - wave_targ, -): - sample_rate = args.output_sample_rate - out_root = Path(args.results_path) - if args.dump_features: - feat_dir = out_root / "feat" - feat_dir.mkdir(exist_ok=True, parents=True) - np.save(feat_dir / f"{sample_id}.npy", feat_pred) - if args.dump_target: - feat_tgt_dir = out_root / "feat_tgt" - feat_tgt_dir.mkdir(exist_ok=True, parents=True) - np.save(feat_tgt_dir / f"{sample_id}.npy", feat_targ) - if args.dump_attentions: - attn_dir = out_root / "attn" - attn_dir.mkdir(exist_ok=True, parents=True) - np.save(attn_dir / f"{sample_id}.npy", attn.numpy()) - if args.dump_eos_probs and not is_na_model: - eos_dir = out_root / "eos" - eos_dir.mkdir(exist_ok=True, parents=True) - 
np.save(eos_dir / f"{sample_id}.npy", eos_prob) - - if args.dump_plots: - images = [feat_pred.T] if is_na_model else [feat_pred.T, attn] - names = ["output"] if is_na_model else ["output", "alignment"] - if feat_targ is not None: - images = [feat_targ.T] + images - names = [f"target (idx={sample_id})"] + names - if is_na_model: - plot_tts_output(images, names, attn, "alignment", suptitle=text) - else: - plot_tts_output(images, names, eos_prob, "eos prob", suptitle=text) - plot_dir = out_root / "plot" - plot_dir.mkdir(exist_ok=True, parents=True) - plt.savefig(plot_dir / f"{sample_id}.png") - plt.close() - - if args.dump_waveforms: - ext = args.audio_format - if wave_pred is not None: - wav_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}" - wav_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_dir / f"{sample_id}.{ext}", wave_pred, sample_rate) - if args.dump_target and wave_targ is not None: - wav_tgt_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}_tgt" - wav_tgt_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_tgt_dir / f"{sample_id}.{ext}", wave_targ, sample_rate) - - -def main(args): - assert(args.dump_features or args.dump_waveforms or args.dump_attentions - or args.dump_eos_probs or args.dump_plots) - if args.max_tokens is None and args.batch_size is None: - args.max_tokens = 8000 - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - task = tasks.setup_task(args) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], - task=task, - ) - model = models[0].cuda() if use_cuda else models[0] - # use the original n_frames_per_step - task.args.n_frames_per_step = saved_cfg.task.n_frames_per_step - task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task) - - data_cfg = task.data_cfg - sample_rate = data_cfg.config.get("features", {}).get("sample_rate", 22050) - resample_fn = { - False: lambda x: x, - True: lambda x: torchaudio.sox_effects.apply_effects_tensor( - x.detach().cpu().unsqueeze(0), sample_rate, - [['rate', str(args.output_sample_rate)]] - )[0].squeeze(0) - }.get(args.output_sample_rate != sample_rate) - if args.output_sample_rate != sample_rate: - logger.info(f"resampling to {args.output_sample_rate}Hz") - - generator = task.build_generator([model], args) - itr = task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - data_buffer_size=args.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - Path(args.results_path).mkdir(exist_ok=True, parents=True) - is_na_model = getattr(model, "NON_AUTOREGRESSIVE", False) - dataset = task.dataset(args.gen_subset) - vocoder = task.args.vocoder - with progress_bar.build_progress_bar(args, itr) as t: - for sample in t: - sample = utils.move_to_cuda(sample) if use_cuda else sample - hypos = generator.generate(model, sample, has_targ=args.dump_target) - for result in postprocess_results( - dataset, sample, hypos, resample_fn, args.dump_target - ): - dump_result(is_na_model, args, vocoder, *result) - - -def cli_main(): - parser = make_parser() - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_input.py 
b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_input.py deleted file mode 100644 index 446534a9f8b87337a4dd752944ea386ff7cf7965..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_input.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import List - -import torch -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class AdaptiveInput(nn.Module): - def __init__( - self, - vocab_size: int, - padding_idx: int, - initial_dim: int, - factor: float, - output_dim: int, - cutoff: List[int], - q_noise: float = 0, - qn_block_size: int = 8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - self.cutoff = cutoff - self.embedding_dim = output_dim - self.padding_idx = padding_idx - - self.embeddings = nn.ModuleList() - for i in range(len(self.cutoff)): - prev = self.cutoff[i - 1] if i > 0 else 0 - size = self.cutoff[i] - prev - dim = int(initial_dim // (factor ** i)) - seq = nn.Sequential( - nn.Embedding(size, dim, self.padding_idx), - quant_noise( - nn.Linear(dim, output_dim, bias=False), q_noise, qn_block_size - ), - ) - - self.embeddings.append(seq) - self.padding_idx = None - self.padding_idx = padding_idx - - def init_weights(m): - if isinstance(m, nn.Embedding): - nn.init.normal_(m.weight, mean=0, std=m.weight.shape[1] ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - elif hasattr(m, "weight"): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def weights_for_band(self, band: int): - return self.embeddings[band][0].weight, self.embeddings[band][1].weight - - def forward(self, input: torch.Tensor): - result = self._float_tensor.new(input.shape + (self.embedding_dim,)) - for i in range(len(self.cutoff)): - mask = input.lt(self.cutoff[i]) - if i > 0: - mask.mul_(input.ge(self.cutoff[i - 1])) - chunk_input = input[mask] - self.cutoff[i - 1] - else: - chunk_input = input[mask] - if mask.any(): - result[mask] = self.embeddings[i](chunk_input) - return result diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_lm.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_lm.py deleted file mode 100644 index c6246a0c0e338fa36244b3aa4fb57f189fbffcb6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_lm.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, metadata={"help": "max sequence length"} - ) - add_bos_token: bool = False - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_lm", dataclass=DummyLMConfig) -class DummyLMTask(FairseqTask): - def __init__(self, cfg: DummyLMConfig): - super().__init__(cfg) - - # load dictionary - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - logger.info("dictionary: {} types".format(len(self.dictionary))) - - seq = torch.arange(cfg.tokens_per_sample + 1) + self.dictionary.pad() + 1 - - self.dummy_src = seq[:-1] - self.dummy_tgt = seq[1:] - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-Text2Image_Generation/index.html b/spaces/OFA-Sys/OFA-Text2Image_Generation/index.html deleted file mode 100644 index bb503531d549f9a5751a8c9c5034dab41a4fec69..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Text2Image_Generation/index.html +++ /dev/null @@ -1,18 +0,0 @@ - - - - - - OFA-Image_Generation - - - -
    - OFA-Text2Image_Generation
    - Click HERE for the interactive demo! Enjoy generating your own images!
    - Note that the generation takes around 105 seconds. Take a break while waiting :)
    - - diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/laser_lstm.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/laser_lstm.py deleted file mode 100644 index 10df90e002d5a7dd74a571dbc3b328c130c57a0a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/laser/laser_src/laser_lstm.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import options, utils - -from fairseq.models import ( - FairseqEncoder, - FairseqIncrementalDecoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) - - -@register_model("laser_lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens=None, - tgt_tokens=None, - tgt_lengths=None, - target_language_id=None, - dataset_name="", - ): - assert target_language_id is not None - - src_encoder_out = self.encoder(src_tokens, src_lengths, dataset_name) - return self.decoder( - prev_output_tokens, src_encoder_out, lang_id=target_language_id - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-hidden-size", type=int, metavar="N", help="encoder hidden size" - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="number of encoder layers" - ) - parser.add_argument( - "--encoder-bidirectional", - action="store_true", - help="make all layers of encoder bidirectional", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-hidden-size", type=int, metavar="N", help="decoder hidden size" - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="number of decoder layers" - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--decoder-zero-init", - type=str, - metavar="BOOL", - help="initialize the decoder hidden/cell state to zero", - ) - parser.add_argument( - "--decoder-lang-embed-dim", - type=int, - metavar="N", - help="decoder language embedding dimension", - ) - parser.add_argument( - "--fixed-embeddings", - action="store_true", - help="keep embeddings fixed (ENCODER ONLY)", - ) # TODO Also apply to decoder embeddings? 
- - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument( - "--encoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for encoder input embedding", - ) - parser.add_argument( - "--encoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for encoder output", - ) - parser.add_argument( - "--decoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for decoder input embedding", - ) - parser.add_argument( - "--decoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for decoder output", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - pretrained_encoder_embed = None - if args.encoder_embed_path: - pretrained_encoder_embed = load_pretrained_embedding_from_file( - args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim - ) - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim - ) - - num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0 - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - fixed_embeddings=args.fixed_embeddings, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - zero_init=options.eval_bool(args.decoder_zero_init), - encoder_embed_dim=args.encoder_embed_dim, - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - num_langs=num_langs, - lang_embed_dim=args.decoder_lang_embed_dim, - ) - return cls(encoder, decoder) - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_value=0.0, - fixed_embeddings=False, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.bidirectional = bidirectional - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - if fixed_embeddings: - self.embed_tokens.weight.requires_grad = False - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - 
num_layers=num_layers, - dropout=self.dropout_out if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - self.padding_value = padding_value - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward(self, src_tokens, src_lengths, dataset_name): - if self.left_pad: - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - self.padding_idx, - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - try: - packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist()) - except BaseException: - raise Exception(f"Packing failed in dataset {dataset_name}") - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.data.new(*state_size).zero_() - c0 = x.data.new(*state_size).zero_() - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_value - ) - x = F.dropout(x, p=self.dropout_out, training=self.training) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - - def combine_bidir(outs): - return torch.cat( - [ - torch.cat([outs[2 * i], outs[2 * i + 1]], dim=0).view( - 1, bsz, self.output_units - ) - for i in range(self.num_layers) - ], - dim=0, - ) - - final_hiddens = combine_bidir(final_hiddens) - final_cells = combine_bidir(final_cells) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - # Set padded outputs to -inf so they are not selected by max-pooling - padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1) - if padding_mask.any(): - x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x) - - # Build the sentence embedding by max-pooling over the encoder outputs - sentemb = x.max(dim=0)[0] - - return { - "sentemb": sentemb, - "encoder_out": (x, final_hiddens, final_cells), - "encoder_padding_mask": encoder_padding_mask - if encoder_padding_mask.any() - else None, - } - - def reorder_encoder_out(self, encoder_out_dict, new_order): - encoder_out_dict["sentemb"] = encoder_out_dict["sentemb"].index_select( - 0, new_order - ) - encoder_out_dict["encoder_out"] = tuple( - eo.index_select(1, new_order) for eo in encoder_out_dict["encoder_out"] - ) - if encoder_out_dict["encoder_padding_mask"] is not None: - encoder_out_dict["encoder_padding_mask"] = encoder_out_dict[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out_dict - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return int(1e5) # an arbitrary large number - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - zero_init=False, - encoder_embed_dim=512, - encoder_output_units=512, - pretrained_embed=None, - num_langs=1, - lang_embed_dim=0, - ): - super().__init__(dictionary) - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - padding_idx = 
dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=encoder_output_units + embed_dim + lang_embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - if zero_init: - self.sentemb2init = None - else: - self.sentemb2init = Linear( - encoder_output_units, 2 * num_layers * hidden_size - ) - - if lang_embed_dim == 0: - self.embed_lang = None - else: - self.embed_lang = nn.Embedding(num_langs, lang_embed_dim) - nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1) - - def forward( - self, prev_output_tokens, encoder_out_dict, incremental_state=None, lang_id=0 - ): - sentemb = encoder_out_dict["sentemb"] - encoder_out = encoder_out_dict["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - # get outputs from encoder - encoder_outs, _, _ = encoder_out[:3] - srclen = encoder_outs.size(0) - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # embed language identifier - if self.embed_lang is not None: - lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id) - langemb = self.embed_lang(lang_ids) - # TODO Should we dropout here??? - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells, input_feed = cached_state - else: - num_layers = len(self.layers) - if self.sentemb2init is None: - prev_hiddens = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - prev_cells = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - else: - init = self.sentemb2init(sentemb) - prev_hiddens = [ - init[:, (2 * i) * self.hidden_size : (2 * i + 1) * self.hidden_size] - for i in range(num_layers) - ] - prev_cells = [ - init[ - :, - (2 * i + 1) * self.hidden_size : (2 * i + 2) * self.hidden_size, - ] - for i in range(num_layers) - ] - input_feed = x.data.new(bsz, self.hidden_size).zero_() - - attn_scores = x.data.new(srclen, seqlen, bsz).zero_() - outs = [] - for j in range(seqlen): - if self.embed_lang is None: - input = torch.cat((x[j, :, :], sentemb), dim=1) - else: - input = torch.cat((x[j, :, :], sentemb, langemb), dim=1) - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = F.dropout(hidden, p=self.dropout_out, training=self.training) - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - out = hidden - out = F.dropout(out, p=self.dropout_out, training=self.training) - - # input feeding - input_feed = out - - # save final output - outs.append(out) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, - incremental_state, - "cached_state", - (prev_hiddens, prev_cells, input_feed), - ) - - # collect outputs across time steps - x = torch.cat(outs, 
dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - attn_scores = attn_scores.transpose(0, 2) - - # project back to size of vocabulary - if hasattr(self, "additional_fc"): - x = self.additional_fc(x) - x = F.dropout(x, p=self.dropout_out, training=self.training) - x = self.fc_out(x) - - return x, attn_scores - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return int(1e5) # an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m = nn.LSTMCell(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.uniform_(-0.1, 0.1) - if bias: - m.bias.data.uniform_(-0.1, 0.1) - return m - - -@register_model_architecture("laser_lstm", "laser_lstm") -def base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_hidden_size = getattr( - args, "encoder_hidden_size", args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 1) - args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.decoder_zero_init = getattr(args, "decoder_zero_init", "0") - args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0) - args.fixed_embeddings = getattr(args, "fixed_embeddings", False) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/byte_bpe.py 
b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/byte_bpe.py deleted file mode 100644 index 31e3a0627827f19ca7f0b58da45e46d40a80c3bf..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/byte_bpe.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.data.encoders.byte_utils import ( - SPACE, - SPACE_ESCAPE, - byte_encode, - smart_byte_decode, -) -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class ByteBpeConfig(FairseqDataclass): - sentencepiece_model_path: str = field( - default="???", metadata={"help": "path to sentencepiece model"} - ) - - -@register_bpe("byte_bpe", dataclass=ByteBpeConfig) -class ByteBPE(object): - def __init__(self, cfg): - vocab = file_utils.cached_path(cfg.sentencepiece_model_path) - try: - import sentencepiece as spm - - self.sp = spm.SentencePieceProcessor() - self.sp.Load(vocab) - except ImportError: - raise ImportError( - "Please install sentencepiece with: pip install sentencepiece" - ) - - def encode(self, x: str) -> str: - byte_encoded = byte_encode(x) - return SPACE.join(self.sp.EncodeAsPieces(byte_encoded)) - - @staticmethod - def decode(x: str) -> str: - unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) - return smart_byte_decode(unescaped) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/gumbel_vector_quantizer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/gumbel_vector_quantizer.py deleted file mode 100644 index 71134388889d7f224655957256e78fd6c02d72a3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/gumbel_vector_quantizer.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class GumbelVectorQuantizer(nn.Module): - def __init__( - self, - dim, - num_vars, - temp, - groups, - combine_groups, - vq_dim, - time_first, - activation=nn.GELU(), - weight_proj_depth=1, - weight_proj_factor=1, - ): - """Vector quantization using gumbel softmax - - Args: - dim: input dimension (channels) - num_vars: number of quantized vectors per group - temp: temperature for training. this should be a tuple of 3 elements: (start, stop, decay factor) - groups: number of groups for vector quantization - combine_groups: whether to use the vectors for all groups - vq_dim: dimensionality of the resulting quantized vector - time_first: if true, expect input in BxTxC format, otherwise in BxCxT - activation: what activation to use (should be a module). this is only used if weight_proj_depth is > 1 - weight_proj_depth: number of layers (with activation in between) to project input before computing logits - weight_proj_factor: this is used only if weight_proj_depth is > 1. 
scales the inner dimensionality of - projections by this factor - """ - super().__init__() - - self.groups = groups - self.combine_groups = combine_groups - self.input_dim = dim - self.num_vars = num_vars - self.time_first = time_first - - assert ( - vq_dim % groups == 0 - ), f"dim {vq_dim} must be divisible by groups {groups} for concatenation" - - var_dim = vq_dim // groups - num_groups = groups if not combine_groups else 1 - - self.vars = nn.Parameter(torch.FloatTensor(1, num_groups * num_vars, var_dim)) - nn.init.uniform_(self.vars) - - if weight_proj_depth > 1: - - def block(input_dim, output_dim): - return nn.Sequential(nn.Linear(input_dim, output_dim), activation) - - inner_dim = self.input_dim * weight_proj_factor - self.weight_proj = nn.Sequential( - *[ - block(self.input_dim if i == 0 else inner_dim, inner_dim) - for i in range(weight_proj_depth - 1) - ], - nn.Linear(inner_dim, groups * num_vars), - ) - else: - self.weight_proj = nn.Linear(self.input_dim, groups * num_vars) - nn.init.normal_(self.weight_proj.weight, mean=0, std=1) - nn.init.zeros_(self.weight_proj.bias) - - if isinstance(temp, str): - import ast - temp = ast.literal_eval(temp) - assert len(temp) == 3, f"{temp}, {len(temp)}" - - self.max_temp, self.min_temp, self.temp_decay = temp - self.curr_temp = self.max_temp - self.codebook_indices = None - - def set_num_updates(self, num_updates): - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def get_codebook_indices(self): - if self.codebook_indices is None: - from itertools import product - - p = [range(self.num_vars)] * self.groups - inds = list(product(*p)) - self.codebook_indices = torch.tensor( - inds, dtype=torch.long, device=self.vars.device - ).flatten() - - if not self.combine_groups: - self.codebook_indices = self.codebook_indices.view( - self.num_vars ** self.groups, -1 - ) - for b in range(1, self.groups): - self.codebook_indices[:, b] += self.num_vars * b - self.codebook_indices = self.codebook_indices.flatten() - return self.codebook_indices - - def codebook(self): - indices = self.get_codebook_indices() - return ( - self.vars.squeeze(0) - .index_select(0, indices) - .view(self.num_vars ** self.groups, -1) - ) - - def sample_from_codebook(self, b, n): - indices = self.get_codebook_indices() - indices = indices.view(-1, self.groups) - cb_size = indices.size(0) - assert ( - n < cb_size - ), f"sample size {n} is greater than size of codebook {cb_size}" - sample_idx = torch.randint(low=0, high=cb_size, size=(b * n,)) - indices = indices[sample_idx] - - z = self.vars.squeeze(0).index_select(0, indices.flatten()).view(b, n, -1) - return z - - def to_codebook_index(self, indices): - res = indices.new_full(indices.shape[:-1], 0) - for i in range(self.groups): - exponent = self.groups - i - 1 - res += indices[..., i] * (self.num_vars ** exponent) - return res - - def forward_idx(self, x): - res = self.forward(x, produce_targets=True) - return res["x"], res["targets"] - - def forward(self, x, produce_targets=False): - - result = {"num_vars": self.num_vars * self.groups} - - if not self.time_first: - x = x.transpose(1, 2) - - bsz, tsz, fsz = x.shape - x = x.reshape(-1, fsz) - x = self.weight_proj(x) - x = x.view(bsz * tsz * self.groups, -1) - - _, k = x.max(-1) - hard_x = ( - x.new_zeros(*x.shape) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(bsz * tsz, self.groups, -1) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - result["code_perplexity"] = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), 
dim=-1) - ).sum() - - avg_probs = torch.softmax( - x.view(bsz * tsz, self.groups, -1).float(), dim=-1 - ).mean(dim=0) - result["prob_perplexity"] = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ).sum() - - result["temp"] = self.curr_temp - - if self.training: - x = F.gumbel_softmax(x.float(), tau=self.curr_temp, hard=True).type_as(x) - else: - x = hard_x - - x = x.view(bsz * tsz, -1) - - vars = self.vars - if self.combine_groups: - vars = vars.repeat(1, self.groups, 1) - - if produce_targets: - result["targets"] = ( - x.view(bsz * tsz * self.groups, -1) - .argmax(dim=-1) - .view(bsz, tsz, self.groups) - .detach() - ) - - x = x.unsqueeze(-1) * vars - x = x.view(bsz * tsz, self.groups, self.num_vars, -1) - x = x.sum(-2) - x = x.view(bsz, tsz, -1) - - if not self.time_first: - x = x.transpose(1, 2) # BTC -> BCT - - result["x"] = x - - return result diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_drop.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_drop.py deleted file mode 100644 index 8961d8bcbc492c40c6b30973234416ce5a414f5a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/layer_drop.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -LayerDrop as described in https://arxiv.org/abs/1909.11556. -""" - -import torch -import torch.nn as nn - - -class LayerDropModuleList(nn.ModuleList): - """ - A LayerDrop implementation based on :class:`torch.nn.ModuleList`. - - We refresh the choice of which layers to drop every time we iterate - over the LayerDropModuleList instance. During evaluation we always - iterate over all layers. 
- - Usage:: - - layers = LayerDropList(p=0.5, modules=[layer1, layer2, layer3]) - for layer in layers: # this might iterate over layers 1 and 3 - x = layer(x) - for layer in layers: # this might iterate over all layers - x = layer(x) - for layer in layers: # this might not iterate over any layers - x = layer(x) - - Args: - p (float): probability of dropping out each layer - modules (iterable, optional): an iterable of modules to add - """ - - def __init__(self, p, modules=None): - super().__init__(modules) - self.p = p - - def __iter__(self): - dropout_probs = torch.empty(len(self)).uniform_() - for i, m in enumerate(super().__iter__()): - if not self.training or (dropout_probs[i] > self.p): - yield m diff --git a/spaces/OIUGLK/bingo/src/components/ui/select.tsx b/spaces/OIUGLK/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/ORI-Muchim/BarKeYaeTTS/app.py b/spaces/ORI-Muchim/BarKeYaeTTS/app.py deleted file mode 100644 index 58a2e8f57c53fe2fa9c257063c2670f17871f63b..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BarKeYaeTTS/app.py +++ /dev/null @@ -1,165 +0,0 @@ -import json -import os -import re - -import librosa -import numpy as np -import torch -from torch import no_grad, LongTensor -import commons -import utils -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from mel_processing import spectrogram_torch - -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - - -def get_text(text, hps, is_phoneme): - text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = 
commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, speed, is_phoneme): - if limitation: - text_len = len(text) - max_len = 500 - if is_phoneme: - max_len *= 3 - else: - if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners": - text_len = len(re.sub("(\[ZH\]|\[JA\])", "", text)) - if text_len > max_len: - return "Error: Text is too long", None - - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_phoneme) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - sid = LongTensor([speaker_id]) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - - - - -def create_to_phoneme_fn(hps): - def to_phoneme_fn(text): - return _clean_text(text, hps.data.text_cleaners) if text != "" else "" - - return to_phoneme_fn - - -css = """ - #advanced-btn { - color: white; - border-color: black; - background: black; - font-size: .7rem !important; - line-height: 19px; - margin-top: 24px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } -""" - -if __name__ == '__main__': - models_tts = [] - models_vc = [] - models_soft_vc = [] - # {"title": "ハミダシクリエイティブ", "lang": "日本語 (Japanese)", "example": "こんにちは。", "type": "vits"} - name = 'BarbaraKeqingYaeMikoTTS' - lang = '한국어 (Korean)' - example = '불편하면 자세를 고쳐 앉아.' - config_path = f"saved_model/config.json" - model_path = f"saved_model/model.pth" - cover_path = f"saved_model/cover.png" - hps = utils.get_hparams_from_file(config_path) - model = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - utils.load_checkpoint(model_path, model, None) - model.eval() - speaker_ids = [sid for sid, name in enumerate(hps.speakers) if name != "None"] - speakers = [name for sid, name in enumerate(hps.speakers) if name != "None"] - - t = 'vits' - models_tts.append((name, cover_path, speakers, lang, example, - hps.symbols, create_tts_fn(model, hps, speaker_ids), - create_to_phoneme_fn(hps))) - - - app = gr.Blocks(css=css) - - with app: - gr.Markdown("# BarKeYaeTTS Using VITS Model\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ORI-Muchim.BarKeYaeTTS)\n\n") - with gr.Tabs(): - with gr.TabItem("TTS"): - with gr.Tabs(): - for i, (name, cover_path, speakers, lang, example, symbols, tts_fn, - to_phoneme_fn) in enumerate(models_tts): - with gr.TabItem(f"Genshin"): - with gr.Column(): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})\n\n" - f"lang: {lang}") - tts_input1 = gr.TextArea(label="Text (500 words limitation)", value=example, - elem_id=f"tts-input{i}") - tts_input2 = gr.Dropdown(label="Speaker", choices=speakers, - type="index", value=speakers[0]) - tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.1, maximum=2, step=0.1) - with gr.Accordion(label="Advanced Options", open=False): - phoneme_input = gr.Checkbox(value=False, label="Phoneme input") - to_phoneme_btn = gr.Button("Covert text to phoneme") - phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1], - samples=[[x] for x in symbols], - 
elem_id=f"phoneme-list{i}") - phoneme_list_json = gr.Json(value=symbols, visible=False) - tts_submit = gr.Button("Generate", variant="primary") - tts_output1 = gr.Textbox(label="Output Message") - tts_output2 = gr.Audio(label="Output Audio") - tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input], - [tts_output1, tts_output2]) - to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1]) - phoneme_list.click(None, [phoneme_list, phoneme_list_json], [], - _js=f""" - (i,phonemes) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input{i}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + phonemes[i].length; - text_input.selectionEnd = startPos + phonemes[i].length; - text_input.blur(); - window.scrollTo(x, y); - return []; - }}""") - - app.queue(concurrency_count=3).launch(show_api=False) diff --git a/spaces/ORI-Muchim/NahidaTTS/text/__init__.py b/spaces/ORI-Muchim/NahidaTTS/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/NahidaTTS/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Omnibus/MusicGen/audiocraft/utils/autocast.py b/spaces/Omnibus/MusicGen/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. 
- kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/index.html b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/index.html deleted file mode 100644 index ba7c22afa6d04d54dbabcb34ebc9fdab9d2983d2..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/index.html +++ /dev/null @@ -1,31 +0,0 @@ - - - - - - - - DI-sheep - - - -
    - - - - diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py deleted file mode 100644 index 24fe59bda44225324928542df3f2ef1745375dfd..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import torch - -from detectron2.utils.file_io import PathManager - -from .torchscript_patch import freeze_training_mode, patch_instances - -__all__ = ["scripting_with_instances", "dump_torchscript_IR"] - - -def scripting_with_instances(model, fields): - """ - Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since - attributes of :class:`Instances` are "dynamically" added in eager mode,it is difficult - for scripting to support it out of the box. This function is made to support scripting - a model that uses :class:`Instances`. It does the following: - - 1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``, - but with all attributes been "static". - The attributes need to be statically declared in the ``fields`` argument. - 2. Register ``new_Instances``, and force scripting compiler to - use it when trying to compile ``Instances``. - - After this function, the process will be reverted. User should be able to script another model - using different fields. - - Example: - Assume that ``Instances`` in the model consist of two attributes named - ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and - :class:`Tensor` respectively during inference. You can call this function like: - :: - fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor} - torchscipt_model = scripting_with_instances(model, fields) - - Note: - It only support models in evaluation mode. - - Args: - model (nn.Module): The input model to be exported by scripting. - fields (Dict[str, type]): Attribute names and corresponding type that - ``Instances`` will use in the model. Note that all attributes used in ``Instances`` - need to be added, regardless of whether they are inputs/outputs of the model. - Data type not defined in detectron2 is not supported for now. - - Returns: - torch.jit.ScriptModule: the model in torchscript format - """ - assert ( - not model.training - ), "Currently we only support exporting models in evaluation mode to torchscript" - - with freeze_training_mode(model), patch_instances(fields): - scripted_model = torch.jit.script(model) - return scripted_model - - -# alias for old name -export_torchscript_with_instances = scripting_with_instances - - -def dump_torchscript_IR(model, dir): - """ - Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph, - inlined graph). Useful for debugging. - - Args: - model (TracedModule/ScriptModule/ScriptFUnction): traced or scripted module - dir (str): output directory to dump files. - """ - dir = os.path.expanduser(dir) - PathManager.mkdirs(dir) - - def _get_script_mod(mod): - if isinstance(mod, torch.jit.TracedModule): - return mod._actual_script_module - return mod - - # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code - with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f: - - def get_code(mod): - # Try a few ways to get code using private attributes. 
- try: - # This contains more information than just `mod.code` - return _get_script_mod(mod)._c.code - except AttributeError: - pass - try: - return mod.code - except AttributeError: - return None - - def dump_code(prefix, mod): - code = get_code(mod) - name = prefix or "root model" - if code is None: - f.write(f"Could not found code for {name} (type={mod.original_name})\n") - f.write("\n") - else: - f.write(f"\nCode for {name}, type={mod.original_name}:\n") - f.write(code) - f.write("\n") - f.write("-" * 80) - - for name, m in mod.named_children(): - dump_code(prefix + "." + name, m) - - if isinstance(model, torch.jit.ScriptFunction): - f.write(get_code(model)) - else: - dump_code("", model) - - def _get_graph(model): - try: - # Recursively dump IR of all modules - return _get_script_mod(model)._c.dump_to_str(True, False, False) - except AttributeError: - return model.graph.str() - - with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f: - f.write(_get_graph(model)) - - # Dump IR of the entire graph (all submodules inlined) - with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f: - f.write(str(model.inlined_graph)) - - if not isinstance(model, torch.jit.ScriptFunction): - # Dump the model structure in pytorch style - with PathManager.open(os.path.join(dir, "model.txt"), "w") as f: - f.write(str(model)) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py deleted file mode 100644 index 290f0f07204e78ef2c4ff918aa500b04330279e6..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch -import json -import numpy as np -from torch.nn import functional as F - -def load_class_freq( - path='datasets/lvis/lvis_v1_train_cat_info.json', - freq_weight=0.5): - cat_info = json.load(open(path, 'r')) - cat_info = torch.tensor( - [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])]) - freq_weight = cat_info.float() ** freq_weight - return freq_weight - -def get_fed_loss_inds( - gt_classes, num_sample_cats=50, C=1203, \ - weight=None, fed_cls_inds=-1): - appeared = torch.unique(gt_classes) # C' - prob = appeared.new_ones(C + 1).float() - prob[-1] = 0 - if len(appeared) < num_sample_cats: - if weight is not None: - prob[:C] = weight.float().clone() - prob[appeared] = 0 - if fed_cls_inds > 0: - prob[fed_cls_inds:] = 0 - more_appeared = torch.multinomial( - prob, num_sample_cats - len(appeared), - replacement=False) - appeared = torch.cat([appeared, more_appeared]) - return appeared \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/train_net.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/train_net.py deleted file mode 100644 index 6ebf5f60a2197633fa418e83aadfedb824537b8e..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tools/train_net.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -A main training script. - -This scripts reads a given config file and runs the training or evaluation. 
-It is an entry point that is made to train standard models in detectron2. - -In order to let one script support training of many models, -this script contains logic that are specific to these built-in models and therefore -may not be suitable for your own project. -For example, your research project perhaps only needs a single "evaluator". - -Therefore, we recommend you to use detectron2 as an library and take -this file as an example of how to use the library. -You may want to write your own script with your datasets and other customizations. -""" - -import logging -import os -from collections import OrderedDict -import torch - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, hooks, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - LVISEvaluator, - PascalVOCDetectionEvaluator, - SemSegEvaluator, - verify_results, -) -from detectron2.modeling import GeneralizedRCNNWithTTA - - -def build_evaluator(cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each builtin dataset. - For your own dataset, you can simply create an evaluator manually in your - script and do not have to worry about the hacky if-else logic here. - """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "coco_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type in ["coco", "coco_panoptic_seg"]: - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type == "coco_panoptic_seg": - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - assert ( - torch.cuda.device_count() > comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - assert ( - torch.cuda.device_count() > comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesSemSegEvaluator(dataset_name) - elif evaluator_type == "pascal_voc": - return PascalVOCDetectionEvaluator(dataset_name) - elif evaluator_type == "lvis": - return LVISEvaluator(dataset_name, output_dir=output_folder) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type) - ) - elif len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - -class Trainer(DefaultTrainer): - """ - We use the "DefaultTrainer" which contains pre-defined default logic for - standard training workflow. They may not work for you, especially if you - are working on a new research project. In that case you can write your - own training loop. You can use "tools/plain_train_net.py" as an example. 
- """ - - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - return build_evaluator(cfg, dataset_name, output_folder) - - @classmethod - def test_with_TTA(cls, cfg, model): - logger = logging.getLogger("detectron2.trainer") - # In the end of training, run an evaluation with TTA - # Only support some R-CNN models. - logger.info("Running inference with test-time augmentation ...") - model = GeneralizedRCNNWithTTA(cfg, model) - evaluators = [ - cls.build_evaluator( - cfg, name, output_folder=os.path.join(cfg.OUTPUT_DIR, "inference_TTA") - ) - for name in cfg.DATASETS.TEST - ] - res = cls.test(cfg, model, evaluators) - res = OrderedDict({k + "_TTA": v for k, v in res.items()}) - return res - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - if cfg.TEST.AUG.ENABLED: - res.update(Trainer.test_with_TTA(cfg, model)) - if comm.is_main_process(): - verify_results(cfg, res) - return res - - """ - If you'd like to do anything fancier than the standard training logic, - consider writing your own training loop (see plain_train_net.py) or - subclassing the trainer. - """ - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - if cfg.TEST.AUG.ENABLED: - trainer.register_hooks( - [hooks.EvalHook(0, lambda: trainer.test_with_TTA(cfg, trainer.model))] - ) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py deleted file mode 100644 index ff65bad1b86d7e3a5980bb5b9fc55798dc8df5f4..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/datasets/pascal_context.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - 
dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psp_head.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index b5f1e71c70c3a20f4007c263ec471a87bb214a48..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. - """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/PAIR/Text2Video-Zero/style.css b/spaces/PAIR/Text2Video-Zero/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/__init__.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/comm.py b/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/comm.py deleted file mode 100644 index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/PascalLiu/FNeVR_demo/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. 
- - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. - The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' 
- - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/syncase.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/syncase.go deleted file mode 100644 index 7335cb1102079cee68201de9c5f185c42197830c..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/syncase.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/utils/gradio_utils.py b/spaces/Pie31415/control-animation/utils/gradio_utils.py deleted file mode 100644 index e2c0d0881ade7690c9c60eecc5d759956768ea5b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/utils/gradio_utils.py +++ /dev/null @@ -1,30 +0,0 @@ -import os - -# App Pose utils -def motion_to_video_path(motion): - videos = [ - "__assets__/walk_01.mp4", - "__assets__/walk_02.mp4", - "__assets__/walk_03.mp4", - "__assets__/run.mp4", - "__assets__/dance1_corr.mp4", - "__assets__/dance2_corr.mp4", - "__assets__/dance3_corr.mp4", - "__assets__/dance4_corr.mp4", - "__assets__/dance5_corr.mp4", - ] - if len(motion.split(" ")) > 1 and motion.split(" ")[1].isnumeric(): - id = int(motion.split(" ")[1]) - 1 - return videos[id] - else: - return motion - -def logo_name_to_path(name): - logo_paths = { - 'Picsart AI Research': '__assets__/pair_watermark.png', - 'Text2Video-Zero': '__assets__/t2v-z_watermark.png', - 'None': None - } - if name in logo_paths: - return logo_paths[name] - return name diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py deleted file mode 100644 index c4e67e29b46ab22f6ffeec85ffc64d8b99800b1b..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/msd.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch -import torch.nn as nn - -from ...modules import NormConv1d -from .base import MultiDiscriminator, MultiDiscriminatorOutputType - - -class ScaleDiscriminator(nn.Module): - """Waveform sub-discriminator. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (Sequence[int]): Kernel sizes for first and last convolutions. - filters (int): Number of initial filters for convolutions. - max_filters (int): Maximum number of filters. - downsample_scales (Sequence[int]): Scale for downsampling implemented as strided convolutions. - inner_kernel_sizes (Sequence[int] or None): Kernel sizes for inner convolutions. - groups (Sequence[int] or None): Groups for inner convolutions. - strides (Sequence[int] or None): Strides for inner convolutions. - paddings (Sequence[int] or None): Paddings for inner convolutions. - norm (str): Normalization method. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - pad (str): Padding for initial convolution. 
- pad_params (dict): Parameters to provide to the padding module. - """ - def __init__(self, in_channels=1, out_channels=1, kernel_sizes: tp.Sequence[int] = [5, 3], - filters: int = 16, max_filters: int = 1024, downsample_scales: tp.Sequence[int] = [4, 4, 4, 4], - inner_kernel_sizes: tp.Optional[tp.Sequence[int]] = None, groups: tp.Optional[tp.Sequence[int]] = None, - strides: tp.Optional[tp.Sequence[int]] = None, paddings: tp.Optional[tp.Sequence[int]] = None, - norm: str = 'weight_norm', activation: str = 'LeakyReLU', - activation_params: dict = {'negative_slope': 0.2}, pad: str = 'ReflectionPad1d', - pad_params: dict = {}): - super().__init__() - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - assert (inner_kernel_sizes is None or len(inner_kernel_sizes) == len(downsample_scales)) - assert (groups is None or len(groups) == len(downsample_scales)) - assert (strides is None or len(strides) == len(downsample_scales)) - assert (paddings is None or len(paddings) == len(downsample_scales)) - self.activation = getattr(torch.nn, activation)(**activation_params) - self.convs = nn.ModuleList() - self.convs.append( - nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - NormConv1d(in_channels, filters, kernel_size=np.prod(kernel_sizes), stride=1, norm=norm) - ) - ) - - in_chs = filters - for i, downsample_scale in enumerate(downsample_scales): - out_chs = min(in_chs * downsample_scale, max_filters) - default_kernel_size = downsample_scale * 10 + 1 - default_stride = downsample_scale - default_padding = (default_kernel_size - 1) // 2 - default_groups = in_chs // 4 - self.convs.append( - NormConv1d(in_chs, out_chs, - kernel_size=inner_kernel_sizes[i] if inner_kernel_sizes else default_kernel_size, - stride=strides[i] if strides else default_stride, - groups=groups[i] if groups else default_groups, - padding=paddings[i] if paddings else default_padding, - norm=norm)) - in_chs = out_chs - - out_chs = min(in_chs * 2, max_filters) - self.convs.append(NormConv1d(in_chs, out_chs, kernel_size=kernel_sizes[0], stride=1, - padding=(kernel_sizes[0] - 1) // 2, norm=norm)) - self.conv_post = NormConv1d(out_chs, out_channels, kernel_size=kernel_sizes[1], stride=1, - padding=(kernel_sizes[1] - 1) // 2, norm=norm) - - def forward(self, x: torch.Tensor): - fmap = [] - for layer in self.convs: - x = layer(x) - x = self.activation(x) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - # x = torch.flatten(x, 1, -1) - return x, fmap - - -class MultiScaleDiscriminator(MultiDiscriminator): - """Multi-Scale (MSD) Discriminator, - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_factor (int): Downsampling factor between the different scales. - scale_norms (Sequence[str]): Normalization for each sub-discriminator. - **kwargs: Additional args for ScaleDiscriminator. 
- """ - def __init__(self, in_channels: int = 1, out_channels: int = 1, downsample_factor: int = 2, - scale_norms: tp.Sequence[str] = ['weight_norm', 'weight_norm', 'weight_norm'], **kwargs): - super().__init__() - self.discriminators = nn.ModuleList([ - ScaleDiscriminator(in_channels, out_channels, norm=norm, **kwargs) for norm in scale_norms - ]) - self.downsample = nn.AvgPool1d(downsample_factor * 2, downsample_factor, padding=downsample_factor) - - @property - def num_discriminators(self): - return len(self.discriminators) - - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - logits = [] - fmaps = [] - for i, disc in enumerate(self.discriminators): - if i != 0: - self.downsample(x) - logit, fmap = disc(x) - logits.append(logit) - fmaps.append(fmap) - return logits, fmaps diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/modules/diffusionmodules/util.py b/spaces/Purple11/Grounded-Diffusion/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index a952e6c40308c33edd422da0ce6a60f47e73661b..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,267 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # 
select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. 
- """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. 
- """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/Rakot2223/faster-whisper-webui/src/conversion/hf_converter.py b/spaces/Rakot2223/faster-whisper-webui/src/conversion/hf_converter.py deleted file mode 100644 index 6da4f0fd672d63b099f21d0498ba4001d23356f7..0000000000000000000000000000000000000000 --- a/spaces/Rakot2223/faster-whisper-webui/src/conversion/hf_converter.py +++ /dev/null @@ -1,67 +0,0 @@ -# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets - -from copy import deepcopy -import torch - -WHISPER_MAPPING = { - "layers": "blocks", - "fc1": "mlp.0", - "fc2": "mlp.2", - "final_layer_norm": "mlp_ln", - "layers": "blocks", - ".self_attn.q_proj": ".attn.query", - ".self_attn.k_proj": ".attn.key", - ".self_attn.v_proj": ".attn.value", - ".self_attn_layer_norm": ".attn_ln", - ".self_attn.out_proj": ".attn.out", - ".encoder_attn.q_proj": ".cross_attn.query", - ".encoder_attn.k_proj": ".cross_attn.key", - ".encoder_attn.v_proj": ".cross_attn.value", - ".encoder_attn_layer_norm": ".cross_attn_ln", - ".encoder_attn.out_proj": ".cross_attn.out", - "decoder.layer_norm.": "decoder.ln.", - "encoder.layer_norm.": "encoder.ln_post.", - "embed_tokens": "token_embedding", - "encoder.embed_positions.weight": "encoder.positional_embedding", - "decoder.embed_positions.weight": "decoder.positional_embedding", - "layer_norm": "ln_post", -} - - -def rename_keys(s_dict): - keys = list(s_dict.keys()) - for key in keys: - new_key = key - for k, v in WHISPER_MAPPING.items(): - if k in key: - new_key = new_key.replace(k, v) - - print(f"{key} -> {new_key}") - - s_dict[new_key] = s_dict.pop(key) - return s_dict - - -def convert_hf_whisper(hf_model_name_or_path: str, whisper_state_path: str): - from transformers import WhisperForConditionalGeneration - transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path) - config = transformer_model.config - - # first build dims - dims = { - 'n_mels': config.num_mel_bins, - 'n_vocab': config.vocab_size, - 'n_audio_ctx': config.max_source_positions, - 'n_audio_state': config.d_model, - 'n_audio_head': config.encoder_attention_heads, - 'n_audio_layer': config.encoder_layers, - 'n_text_ctx': config.max_target_positions, - 'n_text_state': config.d_model, - 'n_text_head': config.decoder_attention_heads, - 'n_text_layer': config.decoder_layers - } - - state_dict = deepcopy(transformer_model.model.state_dict()) - state_dict = rename_keys(state_dict) - - torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path) \ No newline at end of file diff --git 
a/spaces/RamAnanth1/T2I-Adapter/ldm/models/diffusion/classifier.py b/spaces/RamAnanth1/T2I-Adapter/ldm/models/diffusion/classifier.py deleted file mode 100644 index 67e98b9d8ffb96a150b517497ace0a242d7163ef..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/ldm/models/diffusion/classifier.py +++ /dev/null @@ -1,267 +0,0 @@ -import os -import torch -import pytorch_lightning as pl -from omegaconf import OmegaConf -from torch.nn import functional as F -from torch.optim import AdamW -from torch.optim.lr_scheduler import LambdaLR -from copy import deepcopy -from einops import rearrange -from glob import glob -from natsort import natsorted - -from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel -from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config - -__models__ = { - 'class_label': EncoderUNetModel, - 'segmentation': UNetModel -} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class NoisyLatentImageClassifier(pl.LightningModule): - - def __init__(self, - diffusion_path, - num_classes, - ckpt_path=None, - pool='attention', - label_key=None, - diffusion_ckpt_path=None, - scheduler_config=None, - weight_decay=1.e-2, - log_steps=10, - monitor='val/loss', - *args, - **kwargs): - super().__init__(*args, **kwargs) - self.num_classes = num_classes - # get latest config of diffusion model - diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1] - self.diffusion_config = OmegaConf.load(diffusion_config).model - self.diffusion_config.params.ckpt_path = diffusion_ckpt_path - self.load_diffusion() - - self.monitor = monitor - self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1 - self.log_time_interval = self.diffusion_model.num_timesteps // log_steps - self.log_steps = log_steps - - self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \ - else self.diffusion_model.cond_stage_key - - assert self.label_key is not None, 'label_key neither in diffusion model nor in model.params' - - if self.label_key not in __models__: - raise NotImplementedError() - - self.load_classifier(ckpt_path, pool) - - self.scheduler_config = scheduler_config - self.use_scheduler = self.scheduler_config is not None - self.weight_decay = weight_decay - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def load_diffusion(self): - model = instantiate_from_config(self.diffusion_config) - self.diffusion_model = model.eval() - self.diffusion_model.train = disabled_train - for param in self.diffusion_model.parameters(): - param.requires_grad = False - - def load_classifier(self, ckpt_path, pool): - model_config = deepcopy(self.diffusion_config.params.unet_config.params) - model_config.in_channels = 
self.diffusion_config.params.unet_config.params.out_channels - model_config.out_channels = self.num_classes - if self.label_key == 'class_label': - model_config.pool = pool - - self.model = __models__[self.label_key](**model_config) - if ckpt_path is not None: - print('#####################################################################') - print(f'load from ckpt "{ckpt_path}"') - print('#####################################################################') - self.init_from_ckpt(ckpt_path) - - @torch.no_grad() - def get_x_noisy(self, x, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x)) - continuous_sqrt_alpha_cumprod = None - if self.diffusion_model.use_continuous_noise: - continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1) - # todo: make sure t+1 is correct here - - return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise, - continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod) - - def forward(self, x_noisy, t, *args, **kwargs): - return self.model(x_noisy, t) - - @torch.no_grad() - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - @torch.no_grad() - def get_conditioning(self, batch, k=None): - if k is None: - k = self.label_key - assert k is not None, 'Needs to provide label key' - - targets = batch[k].to(self.device) - - if self.label_key == 'segmentation': - targets = rearrange(targets, 'b h w c -> b c h w') - for down in range(self.numd): - h, w = targets.shape[-2:] - targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest') - - # targets = rearrange(targets,'b c h w -> b h w c') - - return targets - - def compute_top_k(self, logits, labels, k, reduction="mean"): - _, top_ks = torch.topk(logits, k, dim=1) - if reduction == "mean": - return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item() - elif reduction == "none": - return (top_ks == labels[:, None]).float().sum(dim=-1) - - def on_train_epoch_start(self): - # save some memory - self.diffusion_model.model.to('cpu') - - @torch.no_grad() - def write_logs(self, loss, logits, targets): - log_prefix = 'train' if self.training else 'val' - log = {} - log[f"{log_prefix}/loss"] = loss.mean() - log[f"{log_prefix}/acc@1"] = self.compute_top_k( - logits, targets, k=1, reduction="mean" - ) - log[f"{log_prefix}/acc@5"] = self.compute_top_k( - logits, targets, k=5, reduction="mean" - ) - - self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True) - self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False) - self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True) - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True) - - def shared_step(self, batch, t=None): - x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key) - targets = self.get_conditioning(batch) - if targets.dim() == 4: - targets = targets.argmax(dim=1) - if t is None: - t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long() - else: - t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long() - x_noisy = self.get_x_noisy(x, t) - logits = self(x_noisy, t) - - loss = F.cross_entropy(logits, targets, reduction='none') - - self.write_logs(loss.detach(), logits.detach(), targets.detach()) - - loss 
= loss.mean() - return loss, logits, x_noisy, targets - - def training_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - return loss - - def reset_noise_accs(self): - self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in - range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)} - - def on_validation_start(self): - self.reset_noise_accs() - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - loss, *_ = self.shared_step(batch) - - for t in self.noisy_acc: - _, logits, _, targets = self.shared_step(batch, t) - self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean')) - self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean')) - - return loss - - def configure_optimizers(self): - optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay) - - if self.use_scheduler: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [optimizer], scheduler - - return optimizer - - @torch.no_grad() - def log_images(self, batch, N=8, *args, **kwargs): - log = dict() - x = self.get_input(batch, self.diffusion_model.first_stage_key) - log['inputs'] = x - - y = self.get_conditioning(batch) - - if self.label_key == 'class_label': - y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['labels'] = y - - if ismap(y): - log['labels'] = self.diffusion_model.to_rgb(y) - - for step in range(self.log_steps): - current_time = step * self.log_time_interval - - _, logits, x_noisy, _ = self.shared_step(batch, t=current_time) - - log[f'inputs@t{current_time}'] = x_noisy - - pred = F.one_hot(logits.argmax(dim=1), num_classes=self.num_classes) - pred = rearrange(pred, 'b h w c -> b c h w') - - log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred) - - for key in log: - log[key] = log[key][:N] - - return log diff --git a/spaces/RamV/ChatRobo/README.md b/spaces/RamV/ChatRobo/README.md deleted file mode 100644 index d7aa88ef22feca1141ab88b6fd8176e4417049b9..0000000000000000000000000000000000000000 --- a/spaces/RamV/ChatRobo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatRobo -emoji: 👀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py deleted file mode 100644 index 7dc9bc53360e95abfa99fe1ebd205a3d3ac620e6..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/_internal_utils.py +++ /dev/null @@ -1,48 +0,0 @@ -""" -requests._internal_utils -~~~~~~~~~~~~~~ - -Provides utility functions that are consumed internally by Requests -which depend on extremely few external helpers (such as compat) -""" -import re - -from .compat import builtin_str - -_VALID_HEADER_NAME_RE_BYTE = re.compile(rb"^[^:\s][^:\r\n]*$") -_VALID_HEADER_NAME_RE_STR = re.compile(r"^[^:\s][^:\r\n]*$") -_VALID_HEADER_VALUE_RE_BYTE = re.compile(rb"^\S[^\r\n]*$|^$") -_VALID_HEADER_VALUE_RE_STR = re.compile(r"^\S[^\r\n]*$|^$") - -HEADER_VALIDATORS 
= { - bytes: (_VALID_HEADER_NAME_RE_BYTE, _VALID_HEADER_VALUE_RE_BYTE), - str: (_VALID_HEADER_NAME_RE_STR, _VALID_HEADER_VALUE_RE_STR), -} - - -def to_native_string(string, encoding="ascii"): - """Given a string object, regardless of type, returns a representation of - that string in the native string type, encoding and decoding where - necessary. This assumes ASCII unless told otherwise. - """ - if isinstance(string, builtin_str): - out = string - else: - out = string.decode(encoding) - - return out - - -def unicode_is_ascii(u_string): - """Determine if unicode string only contains ASCII characters. - - :param str u_string: unicode string to check. Must be unicode - and not Python 2 `str`. - :rtype: bool - """ - assert isinstance(u_string, str) - try: - u_string.encode("ascii") - return True - except UnicodeEncodeError: - return False diff --git a/spaces/Ravanan007/my1projectAi/app.py b/spaces/Ravanan007/my1projectAi/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Ravanan007/my1projectAi/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/setup.py b/spaces/Realcat/image-matching-webui/third_party/Roma/setup.py deleted file mode 100644 index ae777c0e5a41f0e4b03a838d19bc9a2bb04d4617..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from setuptools import setup - -setup( - name="roma", - packages=["roma"], - version="0.0.1", - author="Johan Edstedt", - install_requires=open("requirements.txt", "r").read().split("\n"), -) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/ohem_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/ohem_sampler.py deleted file mode 100644 index 8b99f60ef0176f1b7a56665fb0f59272f65b84cd..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/ohem_sampler.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class OHEMSampler(BaseSampler): - r"""Online Hard Example Mining Sampler described in `Training Region-based - Object Detectors with Online Hard Example Mining - `_. 
- """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.context = context - if not hasattr(self.context, 'num_stages'): - self.bbox_head = self.context.bbox_head - else: - self.bbox_head = self.context.bbox_head[self.context.current_stage] - - def hard_mining(self, inds, num_expected, bboxes, labels, feats): - with torch.no_grad(): - rois = bbox2roi([bboxes]) - if not hasattr(self.context, 'num_stages'): - bbox_results = self.context._bbox_forward(feats, rois) - else: - bbox_results = self.context._bbox_forward( - self.context.current_stage, feats, rois) - cls_score = bbox_results['cls_score'] - loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=rois, - labels=labels, - label_weights=cls_score.new_ones(cls_score.size(0)), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - _, topk_loss_inds = loss.topk(num_expected) - return inds[topk_loss_inds] - - def _sample_pos(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected positive samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of positive samples - """ - # Sample some hard positive samples - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds], - assign_result.labels[pos_inds], feats) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected negative samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. 
- - Returns: - torch.Tensor: Indices of negative samples - """ - # Sample some hard negative samples - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - neg_labels = assign_result.labels.new_empty( - neg_inds.size(0)).fill_(self.bbox_head.num_classes) - return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds], - neg_labels, feats) diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/models/getModels.sh b/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/models/getModels.sh deleted file mode 100644 index 63aef4d552905b312b95f2a4e367067f28eba3d7..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/openpose/models/getModels.sh +++ /dev/null @@ -1,36 +0,0 @@ -# ------------------------- BODY, FOOT, FACE, AND HAND MODELS ------------------------- -# Downloading body pose (COCO and MPI), face and hand models -OPENPOSE_URL="http://posefs1.perception.cs.cmu.edu/OpenPose/models/" -POSE_FOLDER="pose/" -FACE_FOLDER="face/" -HAND_FOLDER="hand/" - -# ------------------------- POSE (BODY+FOOT) MODELS ------------------------- -# Body (BODY_25) -BODY_25_FOLDER=${POSE_FOLDER}"body_25/" -BODY_25_MODEL=${BODY_25_FOLDER}"pose_iter_584000.caffemodel" -wget -c ${OPENPOSE_URL}${BODY_25_MODEL} -P ${BODY_25_FOLDER} - -# Body (COCO) -COCO_FOLDER=${POSE_FOLDER}"coco/" -COCO_MODEL=${COCO_FOLDER}"pose_iter_440000.caffemodel" -wget -c ${OPENPOSE_URL}${COCO_MODEL} -P ${COCO_FOLDER} -# Alternative: it will not check whether file was fully downloaded -# if [ ! -f $COCO_MODEL ]; then -# wget ${OPENPOSE_URL}$COCO_MODEL -P $COCO_FOLDER -# fi - -# Body (MPI) -MPI_FOLDER=${POSE_FOLDER}"mpi/" -MPI_MODEL=${MPI_FOLDER}"pose_iter_160000.caffemodel" -wget -c ${OPENPOSE_URL}${MPI_MODEL} -P ${MPI_FOLDER} - -# "------------------------- FACE MODELS -------------------------" -# Face -FACE_MODEL=${FACE_FOLDER}"pose_iter_116000.caffemodel" -wget -c ${OPENPOSE_URL}${FACE_MODEL} -P ${FACE_FOLDER} - -# "------------------------- HAND MODELS -------------------------" -# Hand -HAND_MODEL=$HAND_FOLDER"pose_iter_102000.caffemodel" -wget -c ${OPENPOSE_URL}${HAND_MODEL} -P ${HAND_FOLDER} diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/README.md b/spaces/Sky5408er/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/Sky5408er/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git a/spaces/SoulAbi/whisper-audio-text-speaker-recognition/README.md b/spaces/SoulAbi/whisper-audio-text-speaker-recognition/README.md deleted file mode 100644 index 2d5eea19bd4a9ac8b7667694a77b3719532b62ee..0000000000000000000000000000000000000000 --- a/spaces/SoulAbi/whisper-audio-text-speaker-recognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper Audio to Text with Speaker Recognition -emoji: 🌖 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git 
a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/app.py b/spaces/SumDimDimSum/yulet1de-hentaidiffusion/app.py deleted file mode 100644 index edf0803cbdf9a26a10899d5021088c3d80eec76d..0000000000000000000000000000000000000000 --- a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/yulet1de/hentaidiffusion").launch() \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/daemonize.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/daemonize.py deleted file mode 100644 index 44b4a2832e0889d31fcc4cc36f18895d4b3a86ba..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/daemonize.py +++ /dev/null @@ -1,4 +0,0 @@ -from warnings import warn - -warn("IPython.utils.daemonize has moved to ipyparallel.apps.daemonize since IPython 4.0", DeprecationWarning, stacklevel=2) -from ipyparallel.apps.daemonize import daemonize diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_always_live_program.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_always_live_program.py deleted file mode 100644 index 6369508ededa6802a5e74847046f2b7bd0f82565..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_always_live_program.py +++ /dev/null @@ -1,32 +0,0 @@ -import sys -import struct -print('Executable: %s' % sys.executable) -import os -def loop_in_thread(): - while True: - import time - time.sleep(.5) - sys.stdout.write('#') - sys.stdout.flush() - -import threading -threading.Thread(target=loop_in_thread).start() - - -def is_python_64bit(): - return (struct.calcsize('P') == 8) - -print('Is 64: %s' % is_python_64bit()) - -if __name__ == '__main__': - print('pid:%s' % (os.getpid())) - i = 0 - while True: - i += 1 - import time - time.sleep(.5) - sys.stdout.write('.') - sys.stdout.flush() - if i % 40 == 0: - sys.stdout.write('\n') - sys.stdout.flush() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/compile_windows.bat b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/compile_windows.bat deleted file mode 100644 index 0dafd06f2d30b6ac41c71cc9bfe4423beee7c91d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/windows/compile_windows.bat +++ /dev/null @@ -1,39 +0,0 @@ -@set VSWHERE=%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe -@echo Using vswhere at %VSWHERE% -@for /f "usebackq tokens=*" %%i in (`"%VSWHERE%" -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath`) do set VSDIR=%%i -@echo Using Visual C++ at %VSDIR% - -call "%VSDIR%\VC\Auxiliary\Build\vcvarsall.bat" x86 -vcvars_spectre_libs=spectre - -cl -DUNICODE -D_UNICODE /EHsc /Zi /O1 /W3 /LD /MD /Qspectre attach.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF /out:attach_x86.dll -copy attach_x86.dll ..\attach_x86.dll /Y -copy attach_x86.pdb ..\attach_x86.pdb /Y - -cl -DUNICODE -D_UNICODE /EHsc /Zi /O1 /W3 /LD /MD /D BITS_32 /Qspectre run_code_on_dllmain.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF 
/out:run_code_on_dllmain_x86.dll -copy run_code_on_dllmain_x86.dll ..\run_code_on_dllmain_x86.dll /Y -copy run_code_on_dllmain_x86.pdb ..\run_code_on_dllmain_x86.pdb /Y - -cl /EHsc /Zi /O1 /W3 /Qspectre inject_dll.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF /out:inject_dll_x86.exe -copy inject_dll_x86.exe ..\inject_dll_x86.exe /Y -copy inject_dll_x86.pdb ..\inject_dll_x86.pdb /Y - -call "%VSDIR%\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64 -vcvars_spectre_libs=spectre - -cl -DUNICODE -D_UNICODE /EHsc /Zi /O1 /W3 /LD /MD /Qspectre attach.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF /out:attach_amd64.dll -copy attach_amd64.dll ..\attach_amd64.dll /Y -copy attach_amd64.pdb ..\attach_amd64.pdb /Y - -cl -DUNICODE -D_UNICODE /EHsc /Zi /O1 /W3 /LD /MD /D BITS_64 /Qspectre run_code_on_dllmain.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF /out:run_code_on_dllmain_amd64.dll -copy run_code_on_dllmain_amd64.dll ..\run_code_on_dllmain_amd64.dll /Y -copy run_code_on_dllmain_amd64.pdb ..\run_code_on_dllmain_amd64.pdb /Y - -cl /EHsc /Zi /O1 /W3 /Qspectre inject_dll.cpp /link /DEBUG /OPT:REF /OPT:ICF /GUARD:CF /out:inject_dll_amd64.exe -copy inject_dll_amd64.exe ..\inject_dll_amd64.exe /Y -copy inject_dll_amd64.pdb ..\inject_dll_amd64.pdb /Y - -del *.exe -del *.lib -del *.obj -del *.pdb -del *.dll -del *.exp \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/abstract_tensor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/abstract_tensor.py deleted file mode 100644 index 166c4539ba00ba8cf4f98ce409d98a563a88a9e3..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/abstract_tensor.py +++ /dev/null @@ -1,339 +0,0 @@ -import abc -import warnings -from abc import ABC -from functools import reduce -from operator import mul -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Generic, - List, - Optional, - Sized, - Tuple, - Type, - TypeVar, - Union, - cast, -) - -import numpy as np - -from docarray.base_doc.io.json import orjson_dumps -from docarray.computation import AbstractComputationalBackend -from docarray.typing.abstract_type import AbstractType - -if TYPE_CHECKING: - from pydantic import BaseConfig - from pydantic.fields import ModelField - - from docarray.proto import NdArrayProto, NodeProto - -T = TypeVar('T', bound='AbstractTensor') -TTensor = TypeVar('TTensor') -ShapeT = TypeVar('ShapeT') - - -# displaying tensors that are too large causes problems in the browser -DISPLAY_TENSOR_OPENAPI_MAX_ITEMS = 256 - - -class _ParametrizedMeta(type): - """ - This metaclass ensures that instance, subclass and equality checks on parametrized Tensors - are handled as expected: - - assert issubclass(TorchTensor[128], TorchTensor[128]) - t = parse_obj_as(TorchTensor[128], torch.zeros(128)) - assert isinstance(t, TorchTensor[128]) - TorchTensor[128] == TorchTensor[128] - hash(TorchTensor[128]) == hash(TorchTensor[128]) - etc. - - This special handling is needed because every call to `AbstractTensor.__getitem__` - creates a new class on the fly. - We want technically distinct but identical classes to be considered equal. 
- """ - - def _equals_special_case(cls, other): - is_type = isinstance(other, type) - is_tensor = is_type and AbstractTensor in other.mro() - same_parents = is_tensor and cls.mro()[1:] == other.mro()[1:] - - subclass_target_shape = getattr(other, '__docarray_target_shape__', False) - self_target_shape = getattr(cls, '__docarray_target_shape__', False) - same_shape = ( - same_parents - and subclass_target_shape - and self_target_shape - and subclass_target_shape == self_target_shape - ) - - return same_shape - - def __subclasscheck__(cls, subclass): - if cls._equals_special_case(subclass): - return True - return super().__subclasscheck__(subclass) - - def __instancecheck__(cls, instance): - is_tensor = isinstance(instance, AbstractTensor) - if is_tensor: # custom handling - _cls = cast(Type[AbstractTensor], cls) - if ( - _cls.__unparametrizedcls__ - ): # This is not None if the tensor is parametrized - if ( - instance.get_comp_backend().shape(instance) - != _cls.__docarray_target_shape__ - ): - return False - return any( - issubclass(candidate, _cls.__unparametrizedcls__) - for candidate in type(instance).mro() - ) - return any(issubclass(candidate, cls) for candidate in type(instance).mro()) - return super().__instancecheck__(instance) - - def __eq__(cls, other): - if cls._equals_special_case(other): - return True - return NotImplemented - - def __hash__(cls): - try: - cls_ = cast(AbstractTensor, cls) - return hash((cls_.__docarray_target_shape__, cls_.__unparametrizedcls__)) - except AttributeError: - raise NotImplementedError( - '`hash()` is not implemented for this class. The `_ParametrizedMeta` ' - 'metaclass should only be used for `AbstractTensor` subclasses. ' - 'Otherwise, you have to implement `__hash__` for your class yourself.' - ) - - -class AbstractTensor(Generic[TTensor, T], AbstractType, ABC, Sized): - __parametrized_meta__: type = _ParametrizedMeta - __unparametrizedcls__: Optional[Type['AbstractTensor']] = None - __docarray_target_shape__: Optional[Tuple[int, ...]] = None - _proto_type_name: str - - def _to_node_protobuf(self: T) -> 'NodeProto': - """Convert itself into a NodeProto protobuf message. This function should - be called when the Document is nested into another Document that need to be - converted into a protobuf - :return: the nested item protobuf message - """ - from docarray.proto import NodeProto - - nd_proto = self.to_protobuf() - return NodeProto(ndarray=nd_proto, type=self._proto_type_name) - - @classmethod - def __docarray_validate_shape__(cls, t: T, shape: Tuple[Union[int, str], ...]) -> T: - """Every tensor has to implement this method in order to - enable syntax of the form AnyTensor[shape]. - It is called when a tensor is assigned to a field of this type. - i.e. when a tensor is passed to a Document field of type AnyTensor[shape]. - - The intended behaviour is as follows: - - - If the shape of `t` is equal to `shape`, return `t`. - - If the shape of `t` is not equal to `shape`, - but can be reshaped to `shape`, return `t` reshaped to `shape`. - - If the shape of `t` is not equal to `shape` - and cannot be reshaped to `shape`, raise a ValueError. - - :param t: The tensor to validate. - :param shape: The shape to validate against. - :return: The validated tensor. 
- """ - comp_be = t.get_comp_backend() - tshape = comp_be.shape(t) - if tshape == shape: - return t - elif any(isinstance(dim, str) or dim == Ellipsis for dim in shape): - ellipsis_occurrences = [ - pos for pos, dim in enumerate(shape) if dim == Ellipsis - ] - if ellipsis_occurrences: - if len(ellipsis_occurrences) > 1: - raise ValueError( - f'Cannot use Ellipsis (...) more than once for the shape {shape}' - ) - ellipsis_pos = ellipsis_occurrences[0] - # Calculate how many dimensions to add. Should be at least 1. - dimensions_needed = max(len(tshape) - len(shape) + 1, 1) - shape = ( - shape[:ellipsis_pos] - + tuple( - f'__dim_var_{index}__' for index in range(dimensions_needed) - ) - + shape[ellipsis_pos + 1 :] - ) - - if len(tshape) != len(shape): - raise ValueError( - f'Tensor shape mismatch. Expected {shape}, got {tshape}' - ) - known_dims: Dict[str, int] = {} - for tdim, dim in zip(tshape, shape): - if isinstance(dim, int) and tdim != dim: - raise ValueError( - f'Tensor shape mismatch. Expected {shape}, got {tshape}' - ) - elif isinstance(dim, str): - if dim in known_dims and known_dims[dim] != tdim: - raise ValueError( - f'Tensor shape mismatch. Expected {shape}, got {tshape}' - ) - else: - known_dims[dim] = tdim - else: - return t - else: - shape = cast(Tuple[int], shape) - warnings.warn( - f'Tensor shape mismatch. Reshaping tensor ' - f'of shape {tshape} to shape {shape}' - ) - try: - value = cls._docarray_from_native(comp_be.reshape(t, shape)) - return cast(T, value) - except RuntimeError: - raise ValueError( - f'Cannot reshape tensor of shape {tshape} to shape {shape}' - ) - - @classmethod - def __docarray_validate_getitem__(cls, item: Any) -> Tuple[int]: - """This method validates the input to `AbstractTensor.__class_getitem__`. - - It is called at "class creation time", - i.e. when a class is created with syntax of the form AnyTensor[shape]. - - The default implementation tries to cast any `item` to a tuple of ints. - A subclass can override this method to implement custom validation logic. - - The output of this is eventually passed to - [`AbstractTensor.__docarray_validate_shape__`] - [docarray.typing.tensor.abstract_tensor.AbstractTensor.__docarray_validate_shape__] - as its `shape` argument. - - Raises `ValueError` if the input `item` does not pass validation. - - :param item: The item to validate, passed to `__class_getitem__` (`Tensor[item]`). - :return: The validated item == the target shape of this tensor. 
- """ - if isinstance(item, int): - item = (item,) - try: - item = tuple(item) - except TypeError: - raise TypeError(f'{item} is not a valid tensor shape.') - return item - - @classmethod - def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None: - field_schema.update(type='array', items={'type': 'number'}) - if cls.__docarray_target_shape__ is not None: - shape_info = ( - '[' + ', '.join([str(s) for s in cls.__docarray_target_shape__]) + ']' - ) - if ( - reduce(mul, cls.__docarray_target_shape__, 1) - <= DISPLAY_TENSOR_OPENAPI_MAX_ITEMS - ): - # custom example only for 'small' shapes, otherwise it is too big to display - example_payload = orjson_dumps(np.zeros(cls.__docarray_target_shape__)) - field_schema.update(example=example_payload) - else: - shape_info = 'not specified' - field_schema['tensor/array shape'] = shape_info - - @classmethod - def _docarray_create_parametrized_type(cls: Type[T], shape: Tuple[int]): - shape_str = ', '.join([str(s) for s in shape]) - - class _ParametrizedTensor( - cls, # type: ignore - metaclass=cls.__parametrized_meta__, # type: ignore - ): - __unparametrizedcls__ = cls - __docarray_target_shape__ = shape - - @classmethod - def validate( - _cls, - value: Any, - field: 'ModelField', - config: 'BaseConfig', - ): - t = super().validate(value, field, config) - return _cls.__docarray_validate_shape__( - t, _cls.__docarray_target_shape__ - ) - - _ParametrizedTensor.__name__ = f'{cls.__name__}[{shape_str}]' - _ParametrizedTensor.__qualname__ = f'{cls.__qualname__}[{shape_str}]' - - return _ParametrizedTensor - - def __class_getitem__(cls, item: Any): - target_shape = cls.__docarray_validate_getitem__(item) - return cls._docarray_create_parametrized_type(target_shape) - - @classmethod - def _docarray_stack(cls: Type[T], seq: Union[List[T], Tuple[T]]) -> T: - """Stack a sequence of tensors into a single tensor.""" - comp_backend = cls.get_comp_backend() - # at runtime, 'T' is always the correct input type for .stack() - # but mypy doesn't know that, so we ignore it here - return cls._docarray_from_native(comp_backend.stack(seq)) # type: ignore - - @classmethod - @abc.abstractmethod - def _docarray_from_native(cls: Type[T], value: Any) -> T: - """ - Create a DocList tensor from a tensor that is native to the given framework, - e.g. from numpy.ndarray or torch.Tensor. - """ - ... - - @staticmethod - @abc.abstractmethod - def get_comp_backend() -> AbstractComputationalBackend: - """The computational backend compatible with this tensor type.""" - ... - - @abc.abstractmethod - def __getitem__(self: T, item) -> T: - """Get a slice of this tensor.""" - ... - - @abc.abstractmethod - def __setitem__(self, index, value): - """Set a slice of this tensor.""" - ... - - @abc.abstractmethod - def __iter__(self): - """Iterate over the elements of this tensor.""" - ... - - @abc.abstractmethod - def to_protobuf(self) -> 'NdArrayProto': - """Convert DocList into a Protobuf message""" - ... 
- - def unwrap(self): - """Return the native tensor object that this DocList tensor wraps.""" - - @abc.abstractmethod - def _docarray_to_json_compatible(self): - """ - Convert tensor into a json compatible object - :return: a representation of the tensor compatible with orjson - """ - return self diff --git a/spaces/Suniilkumaar/MusicGen-updated/tests/__init__.py b/spaces/Suniilkumaar/MusicGen-updated/tests/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/tests/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/Suniilkumaar/SwapMukham/upscaler/__init__.py b/spaces/Suniilkumaar/SwapMukham/upscaler/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/nms_rotated/nms_rotated.h deleted file mode 100644 index 12aca388e47b12dafd20999f2991a9d42f4b904b..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/layers/csrc/nms_rotated/nms_rotated.h +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once -#include - -namespace detectron2 { - -at::Tensor nms_rotated_cpu( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor nms_rotated_cuda( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -inline at::Tensor nms_rotated( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold) { - assert(dets.device().is_cuda() == scores.device().is_cuda()); - if (dets.device().is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return nms_rotated_cuda( - dets.contiguous(), scores.contiguous(), iou_threshold); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - - return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold); -} - -} // namespace detectron2 diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/data/audio.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/data/audio.py deleted file mode 100644 index 05fa53ae8ad1b40ab8b9c5dd134227a2a58c55fe..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/data/audio.py +++ /dev/null @@ -1,217 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. 
-""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio, convert_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. - af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. 
- wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. - wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True, channels:int = 1) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'. - when strategy is 'loudness'log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. 
- """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - if channels > 1: - wav = convert_audio(wav,sample_rate, sample_rate, channels) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. - path.unlink() - raise - return path diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/base.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/base.py deleted file mode 100644 index 3f9f896e632e929a63e9724ab80ecdfc9761b795..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/locations/base.py +++ /dev/null @@ -1,81 +0,0 @@ -import functools -import os -import site -import sys -import sysconfig -import typing - -from pip._internal.exceptions import InstallationError -from pip._internal.utils import appdirs -from pip._internal.utils.virtualenv import running_under_virtualenv - -# Application Directories -USER_CACHE_DIR = appdirs.user_cache_dir("pip") - -# FIXME doesn't account for venv linked to global site-packages -site_packages: str = sysconfig.get_path("purelib") - - -def get_major_minor_version() -> str: - """ - Return the major-minor version of the current Python as a string, e.g. - "3.7" or "3.10". - """ - return "{}.{}".format(*sys.version_info) - - -def change_root(new_root: str, pathname: str) -> str: - """Return 'pathname' with 'new_root' prepended. - - If 'pathname' is relative, this is equivalent to os.path.join(new_root, pathname). - Otherwise, it requires making 'pathname' relative and then joining the - two, which is tricky on DOS/Windows and Mac OS. - - This is borrowed from Python's standard library's distutils module. - """ - if os.name == "posix": - if not os.path.isabs(pathname): - return os.path.join(new_root, pathname) - else: - return os.path.join(new_root, pathname[1:]) - - elif os.name == "nt": - (drive, path) = os.path.splitdrive(pathname) - if path[0] == "\\": - path = path[1:] - return os.path.join(new_root, path) - - else: - raise InstallationError( - f"Unknown platform: {os.name}\n" - "Can not change root path prefix on unknown platform." 
- ) - - -def get_src_prefix() -> str: - if running_under_virtualenv(): - src_prefix = os.path.join(sys.prefix, "src") - else: - # FIXME: keep src in cwd for now (it is not a temporary folder) - try: - src_prefix = os.path.join(os.getcwd(), "src") - except OSError: - # In case the current working directory has been renamed or deleted - sys.exit("The folder you are executing pip from can no longer be found.") - - # under macOS + virtualenv sys.prefix is not properly resolved - # it is something like /path/to/python/bin/.. - return os.path.abspath(src_prefix) - - -try: - # Use getusersitepackages if this is present, as it ensures that the - # value is initialised properly. - user_site: typing.Optional[str] = site.getusersitepackages() -except AttributeError: - user_site = site.USER_SITE - - -@functools.lru_cache(maxsize=None) -def is_osx_framework() -> bool: - return bool(sysconfig.get_config_var("PYTHONFRAMEWORK")) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/tags.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/tags.py deleted file mode 100644 index 9a3d25a71c75c975291cf987001ecd6882d6417d..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/tags.py +++ /dev/null @@ -1,487 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import logging -import platform -import sys -import sysconfig -from importlib.machinery import EXTENSION_SUFFIXES -from typing import ( - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - cast, -) - -from . import _manylinux, _musllinux - -logger = logging.getLogger(__name__) - -PythonVersion = Sequence[int] -MacVersion = Tuple[int, int] - -INTERPRETER_SHORT_NAMES: Dict[str, str] = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} - - -_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32 - - -class Tag: - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform", "_hash"] - - def __init__(self, interpreter: str, abi: str, platform: str) -> None: - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - # The __hash__ of every single element in a Set[Tag] will be evaluated each time - # that a set calls its `.disjoint()` method, which may be called hundreds of - # times when scanning a page of links for packages with tags matching that - # Set[Tag]. Pre-computing the value here produces significant speedups for - # downstream consumers. - self._hash = hash((self._interpreter, self._abi, self._platform)) - - @property - def interpreter(self) -> str: - return self._interpreter - - @property - def abi(self) -> str: - return self._abi - - @property - def platform(self) -> str: - return self._platform - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self._hash == other._hash) # Short-circuit ASAP for perf reasons. 
- and (self._platform == other._platform) - and (self._abi == other._abi) - and (self._interpreter == other._interpreter) - ) - - def __hash__(self) -> int: - return self._hash - - def __str__(self) -> str: - return f"{self._interpreter}-{self._abi}-{self._platform}" - - def __repr__(self) -> str: - return f"<{self} @ {id(self)}>" - - -def parse_tag(tag: str) -> FrozenSet[Tag]: - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. - """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]: - value = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string: str) -> str: - return string.replace(".", "_").replace("-", "_") - - -def _abi3_applies(python_version: PythonVersion) -> bool: - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]: - py_version = tuple(py_version) # To allow for version comparison. - abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append(f"cp{version}") - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version: Optional[PythonVersion] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. 
- """ - if not python_version: - python_version = sys.version_info[:2] - - interpreter = f"cp{_version_nodot(python_version[:2])}" - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. - for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms) - yield from (Tag(interpreter, "none", platform_) for platform_ in platforms) - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi() -> Iterator[str]: - abi = sysconfig.get_config_var("SOABI") - if abi: - yield _normalize_string(abi) - - -def generic_tags( - interpreter: Optional[str] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - platforms = list(platforms or platform_tags()) - abis = list(abis) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]: - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield f"py{_version_nodot(py_version[:2])}" - yield f"py{py_version[0]}" - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield f"py{_version_nodot((py_version[0], minor))}" - - -def compatible_tags( - python_version: Optional[PythonVersion] = None, - interpreter: Optional[str] = None, - platforms: Optional[Iterable[str]] = None, -) -> Iterator[Tag]: - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. 
- - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str: - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]: - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? - if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - if cpu_arch in {"arm64", "x86_64"}: - formats.append("universal2") - - if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}: - formats.append("universal") - - return formats - - -def mac_platforms( - version: Optional[MacVersion] = None, arch: Optional[str] = None -) -> Iterator[str]: - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - - if (10, 0) <= version and version < (11, 0): - # Prior to Mac OS 11, each yearly release of Mac OS bumped the - # "minor" version number. The major version was always 10. - for minor_version in range(version[1], -1, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=10, minor=minor_version, binary_format=binary_format - ) - - if version >= (11, 0): - # Starting with Mac OS 11, each yearly release bumps the major version - # number. The minor versions are now the midyear updates. - for major_version in range(version[0], 10, -1): - compat_version = major_version, 0 - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=major_version, minor=0, binary_format=binary_format - ) - - if version >= (11, 0): - # Mac OS 11 on x86_64 is compatible with binaries from previous releases. - # Arm64 support was introduced in 11.0, so no Arm binaries from previous - # releases exist. - # - # However, the "universal2" binary format can have a - # macOS version earlier than 11.0 when the x86_64 part of the binary supports - # that version of macOS. 
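# Editor's note (comment added for clarity; not present in the original tags.py):
# on macOS 11+ the branches below also emit legacy 10.x compatibility tags
# (macosx_10_16_* down to macosx_10_4_*): x86_64 hosts can still run wheels built
# for those older releases in any of their binary formats, while arm64 hosts only
# pick up the "universal2" variants, since native arm64 builds did not exist
# before 11.0.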
- if arch == "x86_64": - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - else: - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_format = "universal2" - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]: - linux = _normalize_string(sysconfig.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - _, arch = linux.split("_", 1) - yield from _manylinux.platform_tags(linux, arch) - yield from _musllinux.platform_tags(arch) - yield linux - - -def _generic_platforms() -> Iterator[str]: - yield _normalize_string(sysconfig.get_platform()) - - -def platform_tags() -> Iterator[str]: - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name() -> str: - """ - Returns the name of the running interpreter. - """ - name = sys.implementation.name - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(*, warn: bool = False) -> str: - """ - Returns the version of the running interpreter. - """ - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version: PythonVersion) -> str: - return "".join(map(str, version)) - - -def sys_tags(*, warn: bool = False) -> Iterator[Tag]: - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. - """ - - interp_name = interpreter_name() - if interp_name == "cp": - yield from cpython_tags(warn=warn) - else: - yield from generic_tags() - - if interp_name == "pp": - yield from compatible_tags(interpreter="pp3") - else: - yield from compatible_tags() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/status.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/status.py deleted file mode 100644 index 09eff405ec194ee2884f203cb48c5df54ff0b9c7..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/status.py +++ /dev/null @@ -1,132 +0,0 @@ -from types import TracebackType -from typing import Optional, Type - -from .console import Console, RenderableType -from .jupyter import JupyterMixin -from .live import Live -from .spinner import Spinner -from .style import StyleType - - -class Status(JupyterMixin): - """Displays a status indicator with a 'spinner' animation. - - Args: - status (RenderableType): A status renderable (str or Text typically). - console (Console, optional): Console instance to use, or None for global console. Defaults to None. - spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to "dots". 
- spinner_style (StyleType, optional): Style of spinner. Defaults to "status.spinner". - speed (float, optional): Speed factor for spinner animation. Defaults to 1.0. - refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5. - """ - - def __init__( - self, - status: RenderableType, - *, - console: Optional[Console] = None, - spinner: str = "dots", - spinner_style: StyleType = "status.spinner", - speed: float = 1.0, - refresh_per_second: float = 12.5, - ): - self.status = status - self.spinner_style = spinner_style - self.speed = speed - self._spinner = Spinner(spinner, text=status, style=spinner_style, speed=speed) - self._live = Live( - self.renderable, - console=console, - refresh_per_second=refresh_per_second, - transient=True, - ) - - @property - def renderable(self) -> Spinner: - return self._spinner - - @property - def console(self) -> "Console": - """Get the Console used by the Status objects.""" - return self._live.console - - def update( - self, - status: Optional[RenderableType] = None, - *, - spinner: Optional[str] = None, - spinner_style: Optional[StyleType] = None, - speed: Optional[float] = None, - ) -> None: - """Update status. - - Args: - status (Optional[RenderableType], optional): New status renderable or None for no change. Defaults to None. - spinner (Optional[str], optional): New spinner or None for no change. Defaults to None. - spinner_style (Optional[StyleType], optional): New spinner style or None for no change. Defaults to None. - speed (Optional[float], optional): Speed factor for spinner animation or None for no change. Defaults to None. - """ - if status is not None: - self.status = status - if spinner_style is not None: - self.spinner_style = spinner_style - if speed is not None: - self.speed = speed - if spinner is not None: - self._spinner = Spinner( - spinner, text=self.status, style=self.spinner_style, speed=self.speed - ) - self._live.update(self.renderable, refresh=True) - else: - self._spinner.update( - text=self.status, style=self.spinner_style, speed=self.speed - ) - - def start(self) -> None: - """Start the status animation.""" - self._live.start() - - def stop(self) -> None: - """Stop the spinner animation.""" - self._live.stop() - - def __rich__(self) -> RenderableType: - return self.renderable - - def __enter__(self) -> "Status": - self.start() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.stop() - - -if __name__ == "__main__": # pragma: no cover - - from time import sleep - - from .console import Console - - console = Console() - with console.status("[magenta]Covid detector booting up") as status: - sleep(3) - console.log("Importing advanced AI") - sleep(3) - console.log("Advanced Covid AI Ready") - sleep(3) - status.update(status="[bold blue] Scanning for Covid", spinner="earth") - sleep(3) - console.log("Found 10,000,000,000 copies of Covid32.exe") - sleep(3) - status.update( - status="[bold red]Moving Covid32.exe to Trash", - spinner="bouncingBall", - spinner_style="yellow", - ) - sleep(5) - console.print("[bold green]Covid deleted successfully") diff --git a/spaces/Tej3/DepthEstimation/app.py b/spaces/Tej3/DepthEstimation/app.py deleted file mode 100644 index ab46d1b3e7f5504a7916d65d0a1717b406586e35..0000000000000000000000000000000000000000 --- a/spaces/Tej3/DepthEstimation/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import gradio as gr -import torch -from models.pretrained_decv2 import 
enc_dec_model -from models.densenet_v2 import Densenet -from models.unet_resnet18 import ResNet18UNet -from models.unet_resnet50 import UNetWithResnet50Encoder -import numpy as np -import cv2 - -# kb cropping -def cropping(img): - h_im, w_im = img.shape[:2] - - margin_top = int(h_im - 352) - margin_left = int((w_im - 1216) / 2) - - img = img[margin_top: margin_top + 352, - margin_left: margin_left + 1216] - - return img - -def load_model(ckpt, model, optimizer=None): - ckpt_dict = torch.load(ckpt, map_location='cpu') - # keep backward compatibility - if 'model' not in ckpt_dict and 'optimizer' not in ckpt_dict: - state_dict = ckpt_dict - else: - state_dict = ckpt_dict['model'] - weights = {} - for key, value in state_dict.items(): - if key.startswith('module.'): - weights[key[len('module.'):]] = value - else: - weights[key] = value - - model.load_state_dict(weights) - - if optimizer is not None: - optimizer_state = ckpt_dict['optimizer'] - optimizer.load_state_dict(optimizer_state) - -DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' -print(DEVICE) -CWD = "." -CKPT_FILE_NAMES = { - 'Indoor':{ - 'Resnet_enc':'resnet_nyu_best.ckpt', - 'Unet':'resnet18_unet_epoch_08_model_kitti_and_nyu.ckpt', - 'Densenet_enc':'densenet_epoch_15_model.ckpt' - }, - 'Outdoor':{ - 'Resnet_enc':'resnet_encdecmodel_epoch_05_model_nyu_and_kitti.ckpt', - 'Unet':'resnet50_unet_epoch_02_model_nyuandkitti.ckpt', - 'Densenet_enc':'densenet_nyu_then_kitti_epoch_10_model.ckpt' - } -} -MODEL_CLASSES = { - 'Indoor': { - 'Resnet_enc':enc_dec_model(max_depth = 10), - 'Unet':ResNet18UNet(max_depth = 10), - 'Densenet_enc':Densenet(max_depth = 10) - }, - - 'Outdoor': { - 'Resnet_enc':enc_dec_model(max_depth = 80), - 'Unet':UNetWithResnet50Encoder(max_depth = 80), - 'Densenet_enc':Densenet(max_depth = 80) - }, -} -location_types = ['Indoor', 'Outdoor'] -Models = ['Resnet_enc','Unet','Densenet_enc'] -for location in location_types: - for model in Models: - ckpt_dir = f"{CWD}/ckpt/{CKPT_FILE_NAMES[location][model]}" - load_model(ckpt_dir, MODEL_CLASSES[location][model]) - - - -def predict(location, model_name, img): - # ckpt_dir = f"{CWD}/ckpt/{CKPT_FILE_NAMES[location][model_name]}" - # if location == 'nyu': - # max_depth = 10 - # else: - # max_depth = 80 - # model = MODEL_CLASSES[location][model_name](max_depth).to(DEVICE) - model = MODEL_CLASSES[location][model_name].to(DEVICE) - # load_model(ckpt_dir,model) - # print(img.shape) - # assert False - if img.shape == (375,1242,3): - img = cropping(img) - img = torch.tensor(img).permute(2, 0, 1).float().to(DEVICE) - input_RGB = img.unsqueeze(0) - print(input_RGB.shape) - with torch.no_grad(): - pred = model(input_RGB) - pred_d = pred['pred_d'] - pred_d_numpy = pred_d.squeeze().cpu().numpy() - # pred_d_numpy = (pred_d_numpy - pred_d_numpy.mean())/pred_d_numpy.std() - pred_d_numpy = np.clip((pred_d_numpy / pred_d_numpy[15:,:].max()) * 255, 0,255) - # pred_d_numpy = (pred_d_numpy / pred_d_numpy.max()) * 255 - pred_d_numpy = pred_d_numpy.astype(np.uint8) - pred_d_color = cv2.applyColorMap(pred_d_numpy, cv2.COLORMAP_RAINBOW) - pred_d_color = cv2.cvtColor(pred_d_color, cv2.COLOR_BGR2RGB) - # del model - return pred_d_color - -with gr.Blocks() as demo: - gr.Markdown("# Monocular Depth Estimation") - with gr.Row(): - location = gr.Radio(choices=['Indoor', 'Outdoor'],value='Indoor', label = "Select Location Type") - model_name = gr.Radio(['Unet', 'Resnet_enc', 'Densenet_enc'],value="Densenet_enc" ,label="Select model") - with gr.Row(): - with gr.Column(): - input_image = gr.Image(label = 
"Input Image for Depth Estimation") - with gr.Column(): - output_depth_map = gr.Image(label = "Depth prediction Heatmap") - with gr.Row(): - predict_btn = gr.Button("Generate Depthmap") - predict_btn.click(fn=predict, inputs=[location, model_name, input_image], outputs=output_depth_map) - with gr.Row(): - gr.Examples(['./demo_data/Bathroom.jpg', './demo_data/Bedroom.jpg', './demo_data/Bookstore.jpg', './demo_data/Classroom.jpg', './demo_data/Computerlab.jpg', './demo_data/kitti_1.png'], inputs=input_image) -demo.launch() \ No newline at end of file diff --git a/spaces/Teklia/doc-ufcn/tools.py b/spaces/Teklia/doc-ufcn/tools.py deleted file mode 100644 index e2a11926789acb11c9bf37bafb89b659a0128d88..0000000000000000000000000000000000000000 --- a/spaces/Teklia/doc-ufcn/tools.py +++ /dev/null @@ -1,49 +0,0 @@ -# -*- coding: utf-8 -*- - -from dataclasses import dataclass, field - -from doc_ufcn import models -from doc_ufcn.main import DocUFCN - - -@dataclass -class UFCNModel: - name: str - colors: list - title: str - description: str - classes: list = field(default_factory=list) - model: DocUFCN = None - - def get_class_name(self, channel_idx): - return self.classes[channel_idx] - - @property - def loaded(self): - return self.model is not None - - @property - def num_channels(self): - return len(self.classes) - - def load(self): - # Download the model - model_path, parameters = models.download_model(name=self.name) - - # Store classes - self.classes = parameters["classes"] - - # Check that the number of colors is equal to the number of classes -1 - assert self.num_channels - 1 == len( - self.colors - ), f"The parameter classes_colors was filled with the wrong number of colors. {self.num_channels-1} colors are expected instead of {len(self.colors)}." - - # Load the model - self.model = DocUFCN( - no_of_classes=len(self.classes), - model_input_size=parameters["input_size"], - device="cpu", - ) - self.model.load( - model_path=model_path, mean=parameters["mean"], std=parameters["std"] - ) diff --git a/spaces/Thafx/sdrvxl2/README.md b/spaces/Thafx/sdrvxl2/README.md deleted file mode 100644 index 6c7ae6c0912177ea04e1bbf154a6c20199cd1542..0000000000000000000000000000000000000000 --- a/spaces/Thafx/sdrvxl2/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Realistic Vision XL V2.0 -emoji: 📷 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: true -license: mit -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- realistic-vision -models: -- SG161222/RealVisXL_V2.0 ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Vavavoom/stable-diffusion-depth2img/Dockerfile b/spaces/Vavavoom/stable-diffusion-depth2img/Dockerfile deleted file mode 100644 index 520ed0021f743919019b6f16cf4d4a13766eefca..0000000000000000000000000000000000000000 --- a/spaces/Vavavoom/stable-diffusion-depth2img/Dockerfile +++ /dev/null @@ -1,52 +0,0 @@ -FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu18.04 -CMD nvidia-smi - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - git \ - make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev \ - ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \ - && rm -rf /var/lib/apt/lists/* - && git lfs install - - -RUN useradd -ms /bin/bash user -USER user - -ENV HOME=/home/user \ 
- PATH=/home/user/.local/bin:$PATH - -RUN curl https://pyenv.run | bash -ENV PATH=$HOME/.pyenv/shims:$HOME/.pyenv/bin:$PATH -RUN pyenv install 3.8.15 && \ - pyenv global 3.8.15 && \ - pyenv rehash && \ - pip install --no-cache-dir --upgrade pip setuptools wheel - -ENV WORKDIR=/code -WORKDIR $WORKDIR -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -COPY requirements.txt $WORKDIR/requirements.txt -RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt -RUN pip install ninja - -RUN curl https://github.com/isl-org/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt --create-dirs -o $WORKDIR/midas_models/dpt_hybrid-midas-501f0c75.pt -RUN curl https://github.com/isl-org/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt --create-dirs -o $WORKDIR/midas_models/dpt_large-midas-2f21e586.pt - -COPY . . - -ARG TORCH_CUDA_ARCH_LIST=7.5+PTX - -USER root -RUN chown -R user:user $HOME -RUN chmod -R 777 $HOME -RUN chown -R user:user $WORKDIR -RUN chmod -R 777 $WORKDIR - -USER user - -CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/Vikas01/Attendence_System/app.py b/spaces/Vikas01/Attendence_System/app.py deleted file mode 100644 index 949fdf5b32bc702f3eff698cc0b87a54f600949f..0000000000000000000000000000000000000000 --- a/spaces/Vikas01/Attendence_System/app.py +++ /dev/null @@ -1,174 +0,0 @@ -from flask import * -from PIL import Image - -import face_recognition -import cv2 -import numpy as np -import csv -from datetime import datetime - - -################# -from flask_socketio import SocketIO,emit -import base64 - - -################## -cnt =1 - - -app = Flask (__name__ ) - -################# -app.config['SECRET_KEY'] = 'secret!' -socket = SocketIO(app,async_mode="eventlet") -####################### - - -###################### - - - -# def base64_to_image(base64_string): -# # Extract the base64 encoded binary data from the input string -# base64_data = base64_string.split(",")[1] -# # Decode the base64 data to bytes -# image_bytes = base64.b64decode(base64_data) -# # Convert the bytes to numpy array -# image_array = np.frombuffer(image_bytes, dtype=np.uint8) -# # Decode the numpy array as an image using OpenCV -# image = cv2.imdecode(image_array, cv2.IMREAD_COLOR) -# return image - -# @socket.on("connect") -# def test_connect(): -# print("Connected") -# emit("my response", {"data": "Connected"}) - -# @socket.on("image") -# def receive_image(image): -# global cnt -# s = True -# while s : - -# # Decode the base64-encoded image data -# image = base64_to_image(image) - -# known_faces_names = ["Sarwan Sir", "Vikas","Lalit","Jasmeen","Anita Ma'am"] -# known_face_encodings = [] - -# # Load known face encodings -# sir_image = face_recognition.load_image_file("photos/sir.jpeg") -# sir_encoding = face_recognition.face_encodings(sir_image)[0] - -# vikas_image = face_recognition.load_image_file("photos/vikas.jpg") -# vikas_encoding = face_recognition.face_encodings(vikas_image)[0] - -# lalit_image = face_recognition.load_image_file("photos/lalit.jpg") -# lalit_encoding = face_recognition.face_encodings(lalit_image)[0] - -# jasmine_image = face_recognition.load_image_file("photos/jasmine.jpg") -# jasmine_encoding = face_recognition.face_encodings(jasmine_image)[0] - -# maam_image = face_recognition.load_image_file("photos/maam.png") -# maam_encoding = face_recognition.face_encodings(maam_image)[0] - -# known_face_encodings = [sir_encoding, vikas_encoding,lalit_encoding,jasmine_encoding,maam_encoding] - -# students = 
known_faces_names.copy() - -# face_locations = [] -# face_encodings = [] -# face_names = [] - -# # now = datetime.now() -# # current_date = now.strftime("%Y-%m-%d") -# # csv_file = open(f"{current_date}.csv", "a+", newline="") - -# # # csv_writer = csv.writer(csv_file) -# small_frame = cv2.resize(image, (0, 0), fx=0.25, fy=0.25) -# rgb_small_frame = small_frame[:, :, ::-1] -# # # emit("result",{"name":"level " +str(cnt),"score":str(len(face_encodings))}) - -# face_locations = face_recognition.face_locations(rgb_small_frame) -# face_encodings = face_recognition.face_encodings(small_frame, face_locations) -# face_names = [] -# emit("result",{"name":"level2 " +str(cnt),"score":str(len(face_encodings))}) -# cnt = cnt +1 -# for face_encoding in face_encodings: -# # emit("result",{"name":"in for ","score":"34"}) -# matches = face_recognition.compare_faces(known_face_encodings, face_encoding) -# name = "" -# face_distance = face_recognition.face_distance(known_face_encodings, face_encoding) -# best_match_index = np.argmin(face_distance) -# if matches[best_match_index]: -# name = known_faces_names[best_match_index] - -# face_names.append(name) -# s = False -# break - -# emit("result",{"name":str(name),"score":"myScore"}) - - - - -# # for name in face_names: -# # if name in known_faces_names and name in students and name not in existing_names: -# # students.remove(name) -# # print(students) -# # print(f"Attendance recorded for {name}") -# # current_time = now.strftime("%H-%M-%S") -# # csv_writer.writerow([name, current_time, "Present"]) -# # existing_names.add(name) # Add the name to the set of existing names - -@app.route ("/") -def home(): - return render_template("index.html") - - -@app.route('/table') -def show_table(): - # Get the current date - current_date = datetime.now().strftime("%Y-%m-%d") - # Read the CSV file to get attendance data - attendance=[] - try: - with open(f"{current_date}.csv", newline="") as csv_file: - csv_reader = csv.reader(csv_file) - attendance = list(csv_reader) - except FileNotFoundError: - pass - # Render the table.html template and pass the attendance data - return render_template('attendance.html', attendance=attendance) - - - -if __name__ == '__main__': - socket.run(app,host="0.0.0.0", port=7860) - -########################################################################### -# @app.route('/table') -# def show_table(): -# # Get the current date -# current_date = datetime.now().strftime("%Y-%m-%d") -# # Read the CSV file to get attendance data -# attendance=[] -# try: -# with open(f"{current_date}.csv", newline="") as csv_file: -# csv_reader = csv.reader(csv_file) -# attendance = list(csv_reader) -# except FileNotFoundError: -# pass -# # Render the table.html template and pass the attendance data -# return render_template('attendance.html', attendance=attendance) - - - - - - -# if __name__ == "__main__": -# socket.run(app,host="0.0.0.0", port=7860) - - \ No newline at end of file diff --git a/spaces/VishnuSaiTeja/Predictor/app.py b/spaces/VishnuSaiTeja/Predictor/app.py deleted file mode 100644 index 0e4055fc756f32d25fbd34f0ca4ec9c862d1f940..0000000000000000000000000000000000000000 --- a/spaces/VishnuSaiTeja/Predictor/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import streamlit as st -import sklearn -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.ensemble import RandomForestClassifier -import pickle -import numpy as np - -try: - with open('model.pickle', 'rb') as file: - classifier = pickle.load(file) -except Exception as e: - st.error(f"Error 
loading model: {str(e)}") - -with open('Countvectorize.pickle','rb') as file : - sparserize = pickle.load(file) - -st.title('Text Classifier App') - -user_input = [st.text_input('Enter text for classification:', '')] - -if st.button('Predict'): - if user_input != None : - - input = sparserize.transform(user_input) - - output = classifier.predict(input) - - st.write(f'Prediction: {output[0]}') - - else : - st.write('Enter text for classification') - - - - - diff --git a/spaces/VoiceHero69/changer/webui/modules/util.py b/spaces/VoiceHero69/changer/webui/modules/util.py deleted file mode 100644 index 50510deeb7ade496e1e7a7b9f7ef656d415655fb..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/modules/util.py +++ /dev/null @@ -1,64 +0,0 @@ -import shlex -import subprocess -import tempfile -import traceback -from typing import Any - -import PIL -import gradio -import numpy as np -import scipy.io.wavfile -from gradio import processing_utils, utils -from matplotlib import pyplot as plt - -import setup_tools.os -from webui.args import args -from webui.ui.tabs import settings - - -def showwaves( - audio: str | tuple[int, np.ndarray] -): - try: - if isinstance(audio, str): - audio_file = audio - audio = processing_utils.audio_from_file(audio) - else: - tmp_wav = tempfile.NamedTemporaryFile(suffix=".wav", delete=False) - processing_utils.audio_to_file(audio[0], audio[1], tmp_wav.name, format="wav") - audio_file = tmp_wav.name - - output_mp4 = tempfile.NamedTemporaryFile(suffix=".mkv", delete=False) - - command = f'ffmpeg -y -i {audio_file} -filter_complex "[0:a]showwaves=s=1280x720:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -preset veryfast -c:v libx264 -c:a copy {output_mp4.name}' - - if not setup_tools.os.is_windows(): - command = shlex.split(command) - - run = subprocess.run(command) - return output_mp4.name if run.returncode == 0 else None - except Exception as e: - traceback.print_exception(e) - return None - - -def make_waveform( - audio: str | tuple[int, np.ndarray], - *, - bg_color: str = "#f3f4f6", - bg_image: str | None = None, - fg_alpha: float = 1.00, # (was 0.75) - bars_color: str | tuple[str, str] = ("#65B5FF", "#1B76FF"), # (was ("#fbbf24", "#ea580c")) - bar_count: int = 50, - bar_width: float = 0.6, - wav_type: str = None -): - if wav_type is None: - wav_type = 'gradio' - match wav_type: - case 'showwaves': - return showwaves(audio) - case 'gradio': - return gradio.make_waveform(audio, bg_color=bg_color, bg_image=bg_image, fg_alpha=fg_alpha, bars_color=bars_color, bar_count=bar_count, bar_width=bar_width) - case 'none' | _: - return None diff --git a/spaces/Wauplin/pynecone-on-spaces-template/default_app/default_app.py b/spaces/Wauplin/pynecone-on-spaces-template/default_app/default_app.py deleted file mode 100644 index 18803607a728e537b3e9c21c6413157933c69735..0000000000000000000000000000000000000000 --- a/spaces/Wauplin/pynecone-on-spaces-template/default_app/default_app.py +++ /dev/null @@ -1,58 +0,0 @@ -from pathlib import Path - -import pynecone as pc - - -class State(pc.State): - count: int = 0 - - def increment(self): - self.count += 1 - - def decrement(self): - self.count -= 1 - - -def index(): - return pc.center( - pc.vstack( - pc.box( - height="20px", - ), - pc.box( - pc.heading("Pynecone x Spaces 🚀"), - ), - pc.hstack( - pc.button( - "Decrement", - color_scheme="red", - border_radius="1em", - on_click=State.decrement, - ), - pc.heading(State.count, font_size="2em"), - pc.button( - "Increment", - color_scheme="green", - border_radius="1em", 
- on_click=State.increment, - ), - ), - pc.box( - height="50px", - ), - pc.box( - # Hacky way to append README content to the page - pc.markdown( - (Path(__file__).parent.parent / "README.md") - .read_text() - .split("---")[-1] - .strip() - ) - ), - ) - ) - - -app = pc.App(state=State) -app.add_page(index) -app.compile() diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/critics.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/critics.py deleted file mode 100644 index adea9e5cfbb7493ff352195fedb1158f84b79ae2..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/critics.py +++ /dev/null @@ -1,47 +0,0 @@ -from fastai.basic_train import Learner -from fastai.core import * -from fastai.layers import NormType, conv_layer -from fastai.torch_core import * -from fastai.vision import * -from fastai.vision.data import ImageDataBunch -from fastai.vision.gan import AdaptiveLoss, accuracy_thresh_expand - -_conv_args = dict(leaky=0.2, norm_type=NormType.Spectral) - - -def _conv(ni: int, nf: int, ks: int = 3, stride: int = 1, **kwargs): - return conv_layer(ni, nf, ks=ks, stride=stride, **_conv_args, **kwargs) - - -def custom_gan_critic( - n_channels: int = 3, nf: int = 256, n_blocks: int = 3, p: int = 0.15 -): - "Critic to train a `GAN`." - layers = [_conv(n_channels, nf, ks=4, stride=2), nn.Dropout2d(p / 2)] - for i in range(n_blocks): - layers += [ - _conv(nf, nf, ks=3, stride=1), - nn.Dropout2d(p), - _conv(nf, nf * 2, ks=4, stride=2, self_attention=(i == 0)), - ] - nf *= 2 - layers += [ - _conv(nf, nf, ks=3, stride=1), - _conv(nf, 1, ks=4, bias=False, padding=0, use_activ=False), - Flatten(), - ] - return nn.Sequential(*layers) - - -def colorize_crit_learner( - data: ImageDataBunch, - loss_critic=AdaptiveLoss(nn.BCEWithLogitsLoss()), - nf: int = 256, -) -> Learner: - return Learner( - data, - custom_gan_critic(nf=nf), - metrics=accuracy_thresh_expand, - loss_func=loss_critic, - wd=1e-3, - ) diff --git a/spaces/Xenova/semantic-image-search-client/style.css b/spaces/Xenova/semantic-image-search-client/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search-client/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/text/chinese.py b/spaces/XzJosh/Aatrox-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Aatrox-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", 
- '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. - phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if 
__name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." -# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/XzJosh/Ava-Bert-VITS2/models.py b/spaces/XzJosh/Ava-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = 
hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * 
x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - 
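        # The block below is assembled as n_flows (ResidualCouplingLayer, Flip) pairs;
        # forward() walks them in order during training and in reversed order when
        # reverse=True, so the same module serves both directions of the invertible flow.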
self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, 
kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - 
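        # calculate_channels() iterates the usual conv output-size formula
        #   L_out = (L - kernel_size + 2 * pad) // stride + 1
        # once per stride-2 conv above (K times), so the mel-frequency axis shrinks to
        # roughly spec_channels // 2**K, and the GRU input size below is
        # ref_enc_filters[-1] * out_channels features per downsampled frame.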
self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = 
StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * 
noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/XzJosh/XingTong-Bert-VITS2/README.md b/spaces/XzJosh/XingTong-Bert-VITS2/README.md deleted file mode 100644 index 6c49001fd3cc96d9914d472a057c3bf4b3a78551..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/XingTong-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI星瞳 ---- \ No newline at end of file diff --git a/spaces/YE01/saya-vits/text/shanghainese.py b/spaces/YE01/saya-vits/text/shanghainese.py deleted file mode 100644 index cb29c24a08d2e406e8399cf7bc9fe5cb43cb9c61..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/YUANAI/DiffspeechResearch/modules/tts/diffspeech/shallow_diffusion_tts.py b/spaces/YUANAI/DiffspeechResearch/modules/tts/diffspeech/shallow_diffusion_tts.py deleted file mode 100644 index e3c3a6d891a7721949e05f6065c194aaae8ea9e8..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/tts/diffspeech/shallow_diffusion_tts.py +++ /dev/null @@ -1,279 +0,0 @@ -import math -import random -from functools import partial -from inspect import isfunction -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from tqdm import tqdm - -from modules.tts.fs2_orig import FastSpeech2Orig -from modules.tts.diffspeech.net import DiffNet -from modules.tts.commons.align_ops import expand_states - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -# gaussian diffusion trainer class - -def extract(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() - - -def linear_beta_schedule(timesteps, max_beta=0.01): - """ - linear schedule - """ - betas = np.linspace(1e-4, max_beta, timesteps) - return betas - - -def cosine_beta_schedule(timesteps, s=0.008): - """ - cosine schedule - as proposed in https://openreview.net/forum?id=-NEXDKk8gZ - """ - steps = timesteps + 1 - x = np.linspace(0, steps, steps) - alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2 - alphas_cumprod = alphas_cumprod / alphas_cumprod[0] - betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) - return np.clip(betas, a_min=0, a_max=0.999) - - -beta_schedule = { - "cosine": cosine_beta_schedule, - "linear": linear_beta_schedule, -} - - -DIFF_DECODERS = { - 'wavenet': lambda hp: DiffNet(hp), -} - - -class AuxModel(FastSpeech2Orig): - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, spk_id=None, - f0=None, uv=None, energy=None, infer=False, **kwargs): - ret = {} - encoder_out = self.encoder(txt_tokens) # [B, T, C] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - style_embed = self.forward_style_embed(spk_embed, spk_id) - - # add dur - dur_inp = (encoder_out + style_embed) * src_nonpadding - mel2ph = self.forward_dur(dur_inp, mel2ph, txt_tokens, ret) - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - decoder_inp = decoder_inp_ = expand_states(encoder_out, mel2ph) - - # add pitch and energy embed - if self.hparams['use_pitch_embed']: - pitch_inp = (decoder_inp_ + style_embed) * tgt_nonpadding - decoder_inp = decoder_inp + self.forward_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out) - - # add pitch and energy embed - if self.hparams['use_energy_embed']: - energy_inp = (decoder_inp_ + style_embed) * tgt_nonpadding - decoder_inp = decoder_inp + self.forward_energy(energy_inp, energy, ret) - - # decoder input - ret['decoder_inp'] = decoder_inp = (decoder_inp + style_embed) * tgt_nonpadding - if self.hparams['dec_inp_add_noise']: - B, T, _ = decoder_inp.shape - z = kwargs.get('adv_z', torch.randn([B, T, self.z_channels])).to(decoder_inp.device) - ret['adv_z'] = z - decoder_inp = torch.cat([decoder_inp, z], -1) - decoder_inp = self.dec_inp_noise_proj(decoder_inp) * 
tgt_nonpadding - if kwargs['skip_decoder']: - return ret - ret['mel_out'] = self.forward_decoder(decoder_inp, tgt_nonpadding, ret, infer=infer, **kwargs) - return ret - - -class GaussianDiffusion(nn.Module): - def __init__(self, dict_size, hparams, out_dims=None): - super().__init__() - self.hparams = hparams - out_dims = hparams['audio_num_mel_bins'] - denoise_fn = DIFF_DECODERS[hparams['diff_decoder_type']](hparams) - timesteps = hparams['timesteps'] - K_step = hparams['K_step'] - loss_type = hparams['diff_loss_type'] - spec_min = hparams['spec_min'] - spec_max = hparams['spec_max'] - - self.denoise_fn = denoise_fn - self.fs2 = AuxModel(dict_size, hparams) - self.mel_bins = out_dims - - if hparams['schedule_type'] == 'linear': - betas = linear_beta_schedule(timesteps, hparams['max_beta']) - else: - betas = cosine_beta_schedule(timesteps) - - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.K_step = K_step - self.loss_type = loss_type - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']]) - self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']]) - - def q_mean_variance(self, x_start, t): - mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = extract(1. 
- self.alphas_cumprod, t, x_start.shape) - log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, cond, clip_denoised: bool): - noise_pred = self.denoise_fn(x, t, cond=cond) - x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred) - - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return ( - extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def p_losses(self, x_start, t, cond, noise=None, nonpadding=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - x_recon = self.denoise_fn(x_noisy, t, cond) - - if self.loss_type == 'l1': - if nonpadding is not None: - loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean() - else: - # print('are you sure w/o nonpadding?') - loss = (noise - x_recon).abs().mean() - - elif self.loss_type == 'l2': - loss = F.mse_loss(noise, x_recon) - else: - raise NotImplementedError() - - return loss - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, spk_id=None, - ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs): - b, *_, device = *txt_tokens.shape, txt_tokens.device - ret = self.fs2(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, spk_id=spk_id, - f0=f0, uv=uv, energy=energy, infer=infer, skip_decoder=(not infer), **kwargs) - cond = ret['decoder_inp'].transpose(1, 2) - - if not infer: - t = torch.randint(0, self.K_step, (b,), device=device).long() - x = ref_mels - x = self.norm_spec(x) - x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - ret['diff_loss'] = self.p_losses(x, t, cond) - # nonpadding = (mel2ph != 0).float() - # ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding) - ret['mel_out'] = None - else: - ret['fs2_mel'] = ret['mel_out'] - fs2_mels = ret['mel_out'] - t = self.K_step - fs2_mels = self.norm_spec(fs2_mels) - fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :] - - x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long()) - if 
self.hparams.get('gaussian_start') is not None and self.hparams['gaussian_start']: - print('===> gaussian start.') - shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2]) - x = torch.randn(shape, device=device) - for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - x = x[:, 0].transpose(1, 2) - ret['mel_out'] = self.denorm_spec(x) - - return ret - - def norm_spec(self, x): - return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1 - - def denorm_spec(self, x): - return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min - - def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph): - return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph) - - def out2mel(self, x): - return x \ No newline at end of file diff --git a/spaces/YangHao520/testShare/main.py b/spaces/YangHao520/testShare/main.py deleted file mode 100644 index 171f7c5bb6cb520ee0eb5da9a320444d5bf4fe28..0000000000000000000000000000000000000000 --- a/spaces/YangHao520/testShare/main.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -def sketch_recognition(img): - pass -def question_answer(context,question): - return context+'ASD',question+'SSS' -def img2(img): - return img - -def detect(): - gr.Interface(fn=sketch_recognition,inputs='sketchpad',outputs='label').launch() - -def QA(): - gra=gr.Interface(fn=question_answer,inputs=['text','text'],outputs=['textbox','text']) - gra.launch() -def imageTest(): - gra=gr.Interface(fn=img2,inputs='image',outputs='image') - gra.launch() -def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} -def greet(name): - return "Hello"+name - -def main(): - #detect() - #QA() #文本 - #imageTest() - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label",live=True,title='Test for gradio',description='test') - demo.launch(share=True,auth=("admin","123"),auth_message='欢迎来到这里',inbrowser=True,enable_queue=False) -if __name__=="__main__": - main() \ No newline at end of file diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/vae_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/vae_flax.py deleted file mode 100644 index 7ecda9a6e9a0eafe8c9da2abb4a9dc04948a1289..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/vae_flax.py +++ /dev/null @@ -1,858 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
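# Illustrative sketch (not part of any original file above): the GaussianDiffusion
# module in shallow_diffusion_tts.py registers sqrt_alphas_cumprod and
# sqrt_one_minus_alphas_cumprod as buffers and uses them in q_sample() to draw a
# noisy mel x_t from x_0 in closed form,
#     x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
# A minimal NumPy rendering of that schedule and sampling step, with made-up
# shapes, could look like this:

import numpy as np


def _cosine_beta_schedule(timesteps, s=0.008):
    # same cosine schedule as cosine_beta_schedule() in shallow_diffusion_tts.py
    steps = timesteps + 1
    x = np.linspace(0, steps, steps)
    alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return np.clip(betas, a_min=0, a_max=0.999)


def _q_sample(x0, t, noise, alphas_cumprod):
    # forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * noise


if __name__ == "__main__":
    betas = _cosine_beta_schedule(100)
    alphas_cumprod = np.cumprod(1.0 - betas)
    mel = np.random.randn(80, 200)                      # fake [n_mels, frames] spectrogram
    noisy = _q_sample(mel, t=50, noise=np.random.randn(80, 200), alphas_cumprod=alphas_cumprod)
    print(noisy.shape)                                  # (80, 200)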
- -# JAX implementation of VQGAN from taming-transformers https://github.com/CompVis/taming-transformers - -import math -from functools import partial -from typing import Tuple - -import flax -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict - -from ..configuration_utils import ConfigMixin, flax_register_to_config -from ..modeling_flax_utils import FlaxModelMixin -from ..utils import BaseOutput - - -@flax.struct.dataclass -class FlaxDecoderOutput(BaseOutput): - """ - Output of decoding method. - - Args: - sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`): - Decoded output sample of the model. Output of the last layer of the model. - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - - sample: jnp.ndarray - - -@flax.struct.dataclass -class FlaxAutoencoderKLOutput(BaseOutput): - """ - Output of AutoencoderKL encoding method. - - Args: - latent_dist (`FlaxDiagonalGaussianDistribution`): - Encoded outputs of `Encoder` represented as the mean and logvar of `FlaxDiagonalGaussianDistribution`. - `FlaxDiagonalGaussianDistribution` allows for sampling latents from the distribution. - """ - - latent_dist: "FlaxDiagonalGaussianDistribution" - - -class FlaxUpsample2D(nn.Module): - """ - Flax implementation of 2D Upsample layer - - Args: - in_channels (`int`): - Input channels - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - - in_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.in_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - batch, height, width, channels = hidden_states.shape - hidden_states = jax.image.resize( - hidden_states, - shape=(batch, height * 2, width * 2, channels), - method="nearest", - ) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxDownsample2D(nn.Module): - """ - Flax implementation of 2D Downsample layer - - Args: - in_channels (`int`): - Input channels - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - - in_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.conv = nn.Conv( - self.in_channels, - kernel_size=(3, 3), - strides=(2, 2), - padding="VALID", - dtype=self.dtype, - ) - - def __call__(self, hidden_states): - pad = ((0, 0), (0, 1), (0, 1), (0, 0)) # pad height and width dim - hidden_states = jnp.pad(hidden_states, pad_width=pad) - hidden_states = self.conv(hidden_states) - return hidden_states - - -class FlaxResnetBlock2D(nn.Module): - """ - Flax implementation of 2D Resnet Block. - - Args: - in_channels (`int`): - Input channels - out_channels (`int`): - Output channels - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - groups (:obj:`int`, *optional*, defaults to `32`): - The number of groups to use for group norm. - use_nin_shortcut (:obj:`bool`, *optional*, defaults to `None`): - Whether to use `nin_shortcut`. 
This activates a new layer inside ResNet block - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - - in_channels: int - out_channels: int = None - dropout: float = 0.0 - groups: int = 32 - use_nin_shortcut: bool = None - dtype: jnp.dtype = jnp.float32 - - def setup(self): - out_channels = self.in_channels if self.out_channels is None else self.out_channels - - self.norm1 = nn.GroupNorm(num_groups=self.groups, epsilon=1e-6) - self.conv1 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - self.norm2 = nn.GroupNorm(num_groups=self.groups, epsilon=1e-6) - self.dropout_layer = nn.Dropout(self.dropout) - self.conv2 = nn.Conv( - out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - use_nin_shortcut = self.in_channels != out_channels if self.use_nin_shortcut is None else self.use_nin_shortcut - - self.conv_shortcut = None - if use_nin_shortcut: - self.conv_shortcut = nn.Conv( - out_channels, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - def __call__(self, hidden_states, deterministic=True): - residual = hidden_states - hidden_states = self.norm1(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.conv1(hidden_states) - - hidden_states = self.norm2(hidden_states) - hidden_states = nn.swish(hidden_states) - hidden_states = self.dropout_layer(hidden_states, deterministic) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - residual = self.conv_shortcut(residual) - - return hidden_states + residual - - -class FlaxAttentionBlock(nn.Module): - r""" - Flax Convolutional based multi-head attention block for diffusion-based VAE. 
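
    Example (illustrative sketch only; the shape and channel choices below are
    assumptions, not values from the original code):

        import jax
        import jax.numpy as jnp

        attn = FlaxAttentionBlock(channels=64, num_groups=32)  # single head when num_head_channels is None
        x = jnp.ones((1, 16, 16, 64))                          # NHWC feature map
        params = attn.init(jax.random.PRNGKey(0), x)
        y = attn.apply(params, x)                              # same (1, 16, 16, 64) shape, residual added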
- - Parameters: - channels (:obj:`int`): - Input channels - num_head_channels (:obj:`int`, *optional*, defaults to `None`): - Number of attention heads - num_groups (:obj:`int`, *optional*, defaults to `32`): - The number of groups to use for group norm - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - - """ - channels: int - num_head_channels: int = None - num_groups: int = 32 - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.num_heads = self.channels // self.num_head_channels if self.num_head_channels is not None else 1 - - dense = partial(nn.Dense, self.channels, dtype=self.dtype) - - self.group_norm = nn.GroupNorm(num_groups=self.num_groups, epsilon=1e-6) - self.query, self.key, self.value = dense(), dense(), dense() - self.proj_attn = dense() - - def transpose_for_scores(self, projection): - new_projection_shape = projection.shape[:-1] + (self.num_heads, -1) - # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) - new_projection = projection.reshape(new_projection_shape) - # (B, T, H, D) -> (B, H, T, D) - new_projection = jnp.transpose(new_projection, (0, 2, 1, 3)) - return new_projection - - def __call__(self, hidden_states): - residual = hidden_states - batch, height, width, channels = hidden_states.shape - - hidden_states = self.group_norm(hidden_states) - - hidden_states = hidden_states.reshape((batch, height * width, channels)) - - query = self.query(hidden_states) - key = self.key(hidden_states) - value = self.value(hidden_states) - - # transpose - query = self.transpose_for_scores(query) - key = self.transpose_for_scores(key) - value = self.transpose_for_scores(value) - - # compute attentions - scale = 1 / math.sqrt(math.sqrt(self.channels / self.num_heads)) - attn_weights = jnp.einsum("...qc,...kc->...qk", query * scale, key * scale) - attn_weights = nn.softmax(attn_weights, axis=-1) - - # attend to values - hidden_states = jnp.einsum("...kc,...qk->...qc", value, attn_weights) - - hidden_states = jnp.transpose(hidden_states, (0, 2, 1, 3)) - new_hidden_states_shape = hidden_states.shape[:-2] + (self.channels,) - hidden_states = hidden_states.reshape(new_hidden_states_shape) - - hidden_states = self.proj_attn(hidden_states) - hidden_states = hidden_states.reshape((batch, height, width, channels)) - hidden_states = hidden_states + residual - return hidden_states - - -class FlaxDownEncoderBlock2D(nn.Module): - r""" - Flax Resnet blocks-based Encoder block for diffusion-based VAE. 
- - Parameters: - in_channels (:obj:`int`): - Input channels - out_channels (:obj:`int`): - Output channels - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - num_layers (:obj:`int`, *optional*, defaults to 1): - Number of Resnet layer block - resnet_groups (:obj:`int`, *optional*, defaults to `32`): - The number of groups to use for the Resnet block group norm - add_downsample (:obj:`bool`, *optional*, defaults to `True`): - Whether to add downsample layer - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - in_channels: int - out_channels: int - dropout: float = 0.0 - num_layers: int = 1 - resnet_groups: int = 32 - add_downsample: bool = True - dtype: jnp.dtype = jnp.float32 - - def setup(self): - resnets = [] - for i in range(self.num_layers): - in_channels = self.in_channels if i == 0 else self.out_channels - - res_block = FlaxResnetBlock2D( - in_channels=in_channels, - out_channels=self.out_channels, - dropout=self.dropout, - groups=self.resnet_groups, - dtype=self.dtype, - ) - resnets.append(res_block) - self.resnets = resnets - - if self.add_downsample: - self.downsamplers_0 = FlaxDownsample2D(self.out_channels, dtype=self.dtype) - - def __call__(self, hidden_states, deterministic=True): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, deterministic=deterministic) - - if self.add_downsample: - hidden_states = self.downsamplers_0(hidden_states) - - return hidden_states - - -class FlaxUpDecoderBlock2D(nn.Module): - r""" - Flax Resnet blocks-based Decoder block for diffusion-based VAE. - - Parameters: - in_channels (:obj:`int`): - Input channels - out_channels (:obj:`int`): - Output channels - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - num_layers (:obj:`int`, *optional*, defaults to 1): - Number of Resnet layer block - resnet_groups (:obj:`int`, *optional*, defaults to `32`): - The number of groups to use for the Resnet block group norm - add_upsample (:obj:`bool`, *optional*, defaults to `True`): - Whether to add upsample layer - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - in_channels: int - out_channels: int - dropout: float = 0.0 - num_layers: int = 1 - resnet_groups: int = 32 - add_upsample: bool = True - dtype: jnp.dtype = jnp.float32 - - def setup(self): - resnets = [] - for i in range(self.num_layers): - in_channels = self.in_channels if i == 0 else self.out_channels - res_block = FlaxResnetBlock2D( - in_channels=in_channels, - out_channels=self.out_channels, - dropout=self.dropout, - groups=self.resnet_groups, - dtype=self.dtype, - ) - resnets.append(res_block) - - self.resnets = resnets - - if self.add_upsample: - self.upsamplers_0 = FlaxUpsample2D(self.out_channels, dtype=self.dtype) - - def __call__(self, hidden_states, deterministic=True): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, deterministic=deterministic) - - if self.add_upsample: - hidden_states = self.upsamplers_0(hidden_states) - - return hidden_states - - -class FlaxUNetMidBlock2D(nn.Module): - r""" - Flax Unet Mid-Block module. 
- - Parameters: - in_channels (:obj:`int`): - Input channels - dropout (:obj:`float`, *optional*, defaults to 0.0): - Dropout rate - num_layers (:obj:`int`, *optional*, defaults to 1): - Number of Resnet layer block - resnet_groups (:obj:`int`, *optional*, defaults to `32`): - The number of groups to use for the Resnet and Attention block group norm - attn_num_head_channels (:obj:`int`, *optional*, defaults to `1`): - Number of attention heads for each attention block - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - in_channels: int - dropout: float = 0.0 - num_layers: int = 1 - resnet_groups: int = 32 - attn_num_head_channels: int = 1 - dtype: jnp.dtype = jnp.float32 - - def setup(self): - resnet_groups = self.resnet_groups if self.resnet_groups is not None else min(self.in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - FlaxResnetBlock2D( - in_channels=self.in_channels, - out_channels=self.in_channels, - dropout=self.dropout, - groups=resnet_groups, - dtype=self.dtype, - ) - ] - - attentions = [] - - for _ in range(self.num_layers): - attn_block = FlaxAttentionBlock( - channels=self.in_channels, - num_head_channels=self.attn_num_head_channels, - num_groups=resnet_groups, - dtype=self.dtype, - ) - attentions.append(attn_block) - - res_block = FlaxResnetBlock2D( - in_channels=self.in_channels, - out_channels=self.in_channels, - dropout=self.dropout, - groups=resnet_groups, - dtype=self.dtype, - ) - resnets.append(res_block) - - self.resnets = resnets - self.attentions = attentions - - def __call__(self, hidden_states, deterministic=True): - hidden_states = self.resnets[0](hidden_states, deterministic=deterministic) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn(hidden_states) - hidden_states = resnet(hidden_states, deterministic=deterministic) - - return hidden_states - - -class FlaxEncoder(nn.Module): - r""" - Flax Implementation of VAE Encoder. - - This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. 
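
    Example (illustrative sketch only; the channel counts and input shape are
    assumptions chosen for the demo, not values from the original code):

        import jax
        import jax.numpy as jnp

        enc = FlaxEncoder(
            in_channels=3,
            out_channels=4,
            down_block_types=("DownEncoderBlock2D", "DownEncoderBlock2D"),
            block_out_channels=(32, 64),
            double_z=True,
        )
        x = jnp.ones((1, 32, 32, 3))                        # NHWC image batch
        params = enc.init(jax.random.PRNGKey(0), x, deterministic=True)
        moments = enc.apply(params, x, deterministic=True)  # (1, 16, 16, 8) = concat(mean, logvar)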
- - Finally, this model supports inherent JAX features such as: - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - in_channels (:obj:`int`, *optional*, defaults to 3): - Input channels - out_channels (:obj:`int`, *optional*, defaults to 3): - Output channels - down_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(DownEncoderBlock2D)`): - DownEncoder block type - block_out_channels (:obj:`Tuple[str]`, *optional*, defaults to `(64,)`): - Tuple containing the number of output channels for each block - layers_per_block (:obj:`int`, *optional*, defaults to `2`): - Number of Resnet layer for each block - norm_num_groups (:obj:`int`, *optional*, defaults to `32`): - norm num group - act_fn (:obj:`str`, *optional*, defaults to `silu`): - Activation function - double_z (:obj:`bool`, *optional*, defaults to `False`): - Whether to double the last output channels - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - Parameters `dtype` - """ - in_channels: int = 3 - out_channels: int = 3 - down_block_types: Tuple[str] = ("DownEncoderBlock2D",) - block_out_channels: Tuple[int] = (64,) - layers_per_block: int = 2 - norm_num_groups: int = 32 - act_fn: str = "silu" - double_z: bool = False - dtype: jnp.dtype = jnp.float32 - - def setup(self): - block_out_channels = self.block_out_channels - # in - self.conv_in = nn.Conv( - block_out_channels[0], - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - # downsampling - down_blocks = [] - output_channel = block_out_channels[0] - for i, _ in enumerate(self.down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = FlaxDownEncoderBlock2D( - in_channels=input_channel, - out_channels=output_channel, - num_layers=self.layers_per_block, - resnet_groups=self.norm_num_groups, - add_downsample=not is_final_block, - dtype=self.dtype, - ) - down_blocks.append(down_block) - self.down_blocks = down_blocks - - # middle - self.mid_block = FlaxUNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_groups=self.norm_num_groups, - attn_num_head_channels=None, - dtype=self.dtype, - ) - - # end - conv_out_channels = 2 * self.out_channels if self.double_z else self.out_channels - self.conv_norm_out = nn.GroupNorm(num_groups=self.norm_num_groups, epsilon=1e-6) - self.conv_out = nn.Conv( - conv_out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - def __call__(self, sample, deterministic: bool = True): - # in - sample = self.conv_in(sample) - - # downsampling - for block in self.down_blocks: - sample = block(sample, deterministic=deterministic) - - # middle - sample = self.mid_block(sample, deterministic=deterministic) - - # end - sample = self.conv_norm_out(sample) - sample = nn.swish(sample) - sample = self.conv_out(sample) - - return sample - - -class FlaxDecoder(nn.Module): - r""" - Flax Implementation of VAE Decoder. - - This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. 
Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - in_channels (:obj:`int`, *optional*, defaults to 3): - Input channels - out_channels (:obj:`int`, *optional*, defaults to 3): - Output channels - up_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(UpDecoderBlock2D)`): - UpDecoder block type - block_out_channels (:obj:`Tuple[str]`, *optional*, defaults to `(64,)`): - Tuple containing the number of output channels for each block - layers_per_block (:obj:`int`, *optional*, defaults to `2`): - Number of Resnet layer for each block - norm_num_groups (:obj:`int`, *optional*, defaults to `32`): - norm num group - act_fn (:obj:`str`, *optional*, defaults to `silu`): - Activation function - double_z (:obj:`bool`, *optional*, defaults to `False`): - Whether to double the last output channels - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - parameters `dtype` - """ - in_channels: int = 3 - out_channels: int = 3 - up_block_types: Tuple[str] = ("UpDecoderBlock2D",) - block_out_channels: int = (64,) - layers_per_block: int = 2 - norm_num_groups: int = 32 - act_fn: str = "silu" - dtype: jnp.dtype = jnp.float32 - - def setup(self): - block_out_channels = self.block_out_channels - - # z to block_in - self.conv_in = nn.Conv( - block_out_channels[-1], - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - # middle - self.mid_block = FlaxUNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_groups=self.norm_num_groups, - attn_num_head_channels=None, - dtype=self.dtype, - ) - - # upsampling - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - up_blocks = [] - for i, _ in enumerate(self.up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = FlaxUpDecoderBlock2D( - in_channels=prev_output_channel, - out_channels=output_channel, - num_layers=self.layers_per_block + 1, - resnet_groups=self.norm_num_groups, - add_upsample=not is_final_block, - dtype=self.dtype, - ) - up_blocks.append(up_block) - prev_output_channel = output_channel - - self.up_blocks = up_blocks - - # end - self.conv_norm_out = nn.GroupNorm(num_groups=self.norm_num_groups, epsilon=1e-6) - self.conv_out = nn.Conv( - self.out_channels, - kernel_size=(3, 3), - strides=(1, 1), - padding=((1, 1), (1, 1)), - dtype=self.dtype, - ) - - def __call__(self, sample, deterministic: bool = True): - # z to block_in - sample = self.conv_in(sample) - - # middle - sample = self.mid_block(sample, deterministic=deterministic) - - # upsampling - for block in self.up_blocks: - sample = block(sample, deterministic=deterministic) - - sample = self.conv_norm_out(sample) - sample = nn.swish(sample) - sample = self.conv_out(sample) - - return sample - - -class FlaxDiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - # Last axis to 
account for channels-last - self.mean, self.logvar = jnp.split(parameters, 2, axis=-1) - self.logvar = jnp.clip(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = jnp.exp(0.5 * self.logvar) - self.var = jnp.exp(self.logvar) - if self.deterministic: - self.var = self.std = jnp.zeros_like(self.mean) - - def sample(self, key): - return self.mean + self.std * jax.random.normal(key, self.mean.shape) - - def kl(self, other=None): - if self.deterministic: - return jnp.array([0.0]) - - if other is None: - return 0.5 * jnp.sum(self.mean**2 + self.var - 1.0 - self.logvar, axis=[1, 2, 3]) - - return 0.5 * jnp.sum( - jnp.square(self.mean - other.mean) / other.var + self.var / other.var - 1.0 - self.logvar + other.logvar, - axis=[1, 2, 3], - ) - - def nll(self, sample, axis=[1, 2, 3]): - if self.deterministic: - return jnp.array([0.0]) - - logtwopi = jnp.log(2.0 * jnp.pi) - return 0.5 * jnp.sum(logtwopi + self.logvar + jnp.square(sample - self.mean) / self.var, axis=axis) - - def mode(self): - return self.mean - - -@flax_register_to_config -class FlaxAutoencoderKL(nn.Module, FlaxModelMixin, ConfigMixin): - r""" - Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational - Bayes by Diederik P. Kingma and Max Welling. - - This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - in_channels (:obj:`int`, *optional*, defaults to 3): - Input channels - out_channels (:obj:`int`, *optional*, defaults to 3): - Output channels - down_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(DownEncoderBlock2D)`): - DownEncoder block type - up_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(UpDecoderBlock2D)`): - UpDecoder block type - block_out_channels (:obj:`Tuple[str]`, *optional*, defaults to `(64,)`): - Tuple containing the number of output channels for each block - layers_per_block (:obj:`int`, *optional*, defaults to `2`): - Number of Resnet layer for each block - act_fn (:obj:`str`, *optional*, defaults to `silu`): - Activation function - latent_channels (:obj:`int`, *optional*, defaults to `4`): - Latent space channels - norm_num_groups (:obj:`int`, *optional*, defaults to `32`): - Norm num group - sample_size (:obj:`int`, *optional*, defaults to `32`): - Sample input size - dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): - parameters `dtype` - """ - in_channels: int = 3 - out_channels: int = 3 - down_block_types: Tuple[str] = ("DownEncoderBlock2D",) - up_block_types: Tuple[str] = ("UpDecoderBlock2D",) - block_out_channels: Tuple[int] = (64,) - layers_per_block: int = 1 - act_fn: str = "silu" - latent_channels: int = 4 - norm_num_groups: int = 32 - sample_size: int = 32 - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.encoder = FlaxEncoder( - in_channels=self.config.in_channels, - out_channels=self.config.latent_channels, - 
down_block_types=self.config.down_block_types, - block_out_channels=self.config.block_out_channels, - layers_per_block=self.config.layers_per_block, - act_fn=self.config.act_fn, - norm_num_groups=self.config.norm_num_groups, - double_z=True, - dtype=self.dtype, - ) - self.decoder = FlaxDecoder( - in_channels=self.config.latent_channels, - out_channels=self.config.out_channels, - up_block_types=self.config.up_block_types, - block_out_channels=self.config.block_out_channels, - layers_per_block=self.config.layers_per_block, - norm_num_groups=self.config.norm_num_groups, - act_fn=self.config.act_fn, - dtype=self.dtype, - ) - self.quant_conv = nn.Conv( - 2 * self.config.latent_channels, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - self.post_quant_conv = nn.Conv( - self.config.latent_channels, - kernel_size=(1, 1), - strides=(1, 1), - padding="VALID", - dtype=self.dtype, - ) - - def init_weights(self, rng: jax.random.PRNGKey) -> FrozenDict: - # init input tensors - sample_shape = (1, self.in_channels, self.sample_size, self.sample_size) - sample = jnp.zeros(sample_shape, dtype=jnp.float32) - - params_rng, dropout_rng, gaussian_rng = jax.random.split(rng, 3) - rngs = {"params": params_rng, "dropout": dropout_rng, "gaussian": gaussian_rng} - - return self.init(rngs, sample)["params"] - - def encode(self, sample, deterministic: bool = True, return_dict: bool = True): - sample = jnp.transpose(sample, (0, 2, 3, 1)) - - hidden_states = self.encoder(sample, deterministic=deterministic) - moments = self.quant_conv(hidden_states) - posterior = FlaxDiagonalGaussianDistribution(moments) - - if not return_dict: - return (posterior,) - - return FlaxAutoencoderKLOutput(latent_dist=posterior) - - def decode(self, latents, deterministic: bool = True, return_dict: bool = True): - if latents.shape[-1] != self.config.latent_channels: - latents = jnp.transpose(latents, (0, 2, 3, 1)) - - hidden_states = self.post_quant_conv(latents) - hidden_states = self.decoder(hidden_states, deterministic=deterministic) - - hidden_states = jnp.transpose(hidden_states, (0, 3, 1, 2)) - - if not return_dict: - return (hidden_states,) - - return FlaxDecoderOutput(sample=hidden_states) - - def __call__(self, sample, sample_posterior=False, deterministic: bool = True, return_dict: bool = True): - posterior = self.encode(sample, deterministic=deterministic, return_dict=return_dict) - if sample_posterior: - rng = self.make_rng("gaussian") - hidden_states = posterior.latent_dist.sample(rng) - else: - hidden_states = posterior.latent_dist.mode() - - sample = self.decode(hidden_states, return_dict=return_dict).sample - - if not return_dict: - return (sample,) - - return FlaxDecoderOutput(sample=sample) diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/deformable/deform_conv.h b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/deformable/deform_conv.h deleted file mode 100644 index 965c1bfd47b58f9802d1c3fd69a5962517b2da61..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/csrc/deformable/deform_conv.h +++ /dev/null @@ -1,377 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
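FlaxDiagonalGaussianDistribution above splits the quant_conv output into a mean and a clipped log-variance along the last (channel) axis, draws samples with the reparameterization trick, and exposes a closed-form KL term against a standard normal. A small self-contained sketch of that arithmetic, with a random 8-channel moments tensor standing in for the encoder output:

import jax
import jax.numpy as jnp

moments = jax.random.normal(jax.random.PRNGKey(0), (1, 4, 4, 8))   # stand-in for the quant_conv output (NHWC)

mean, logvar = jnp.split(moments, 2, axis=-1)        # channels-last split, as in the class above
logvar = jnp.clip(logvar, -30.0, 20.0)
std = jnp.exp(0.5 * logvar)

z = mean + std * jax.random.normal(jax.random.PRNGKey(1), mean.shape)   # reparameterized sample

# KL(N(mean, var) || N(0, I)), summed over the spatial and channel axes
kl = 0.5 * jnp.sum(mean**2 + jnp.exp(logvar) - 1.0 - logvar, axis=(1, 2, 3))

print(z.shape, kl.shape)                             # (1, 4, 4, 4) (1,)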
-#pragma once -#include - -namespace detectron2 { - -#if defined(WITH_CUDA) || defined(WITH_HIP) -int deform_conv_forward_cuda( - at::Tensor input, - at::Tensor weight, - at::Tensor offset, - at::Tensor output, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step); - -int deform_conv_backward_input_cuda( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradInput, - at::Tensor gradOffset, - at::Tensor weight, - at::Tensor columns, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step); - -int deform_conv_backward_parameters_cuda( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - float scale, - int im2col_step); - -void modulated_deform_conv_cuda_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor output, - at::Tensor columns, - int kernel_h, - int kernel_w, - const int stride_h, - const int stride_w, - const int pad_h, - const int pad_w, - const int dilation_h, - const int dilation_w, - const int group, - const int deformable_group, - const bool with_bias); - -void modulated_deform_conv_cuda_backward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor columns, - at::Tensor grad_input, - at::Tensor grad_weight, - at::Tensor grad_bias, - at::Tensor grad_offset, - at::Tensor grad_mask, - at::Tensor grad_output, - int kernel_h, - int kernel_w, - int stride_h, - int stride_w, - int pad_h, - int pad_w, - int dilation_h, - int dilation_w, - int group, - int deformable_group, - const bool with_bias); - -#endif - -inline int deform_conv_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor offset, - at::Tensor output, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return deform_conv_forward_cuda( - input, - weight, - offset, - output, - columns, - ones, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline int deform_conv_backward_input( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradInput, - at::Tensor gradOffset, - at::Tensor weight, - at::Tensor columns, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step) { - if (gradOutput.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), 
"offset tensor is not on GPU!"); - return deform_conv_backward_input_cuda( - input, - offset, - gradOutput, - gradInput, - gradOffset, - weight, - columns, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline int deform_conv_backward_filter( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - float scale, - int im2col_step) { - if (gradOutput.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return deform_conv_backward_parameters_cuda( - input, - offset, - gradOutput, - gradWeight, - columns, - ones, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - scale, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline void modulated_deform_conv_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor output, - at::Tensor columns, - int kernel_h, - int kernel_w, - const int stride_h, - const int stride_w, - const int pad_h, - const int pad_w, - const int dilation_h, - const int dilation_w, - const int group, - const int deformable_group, - const bool with_bias) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return modulated_deform_conv_cuda_forward( - input, - weight, - bias, - ones, - offset, - mask, - output, - columns, - kernel_h, - kernel_w, - stride_h, - stride_w, - pad_h, - pad_w, - dilation_h, - dilation_w, - group, - deformable_group, - with_bias); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline void modulated_deform_conv_backward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor columns, - at::Tensor grad_input, - at::Tensor grad_weight, - at::Tensor grad_bias, - at::Tensor grad_offset, - at::Tensor grad_mask, - at::Tensor grad_output, - int kernel_h, - int kernel_w, - int stride_h, - int stride_w, - int pad_h, - int pad_w, - int dilation_h, - int dilation_w, - int group, - int deformable_group, - const bool with_bias) { - if (grad_output.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return modulated_deform_conv_cuda_backward( - input, - weight, - bias, - ones, - offset, - mask, - columns, - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h, - kernel_w, - stride_h, - stride_w, - 
pad_h, - pad_w, - dilation_h, - dilation_w, - group, - deformable_group, - with_bias); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -} // namespace detectron2 diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/models/networks.py b/spaces/YuxinJ/Scenimefy/Scenimefy/models/networks.py deleted file mode 100644 index b75144df526662be72262f815d866ed8f86c7866..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/models/networks.py +++ /dev/null @@ -1,1513 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import init -import functools -from torch.optim import lr_scheduler -import numpy as np -from Scenimefy.models.stylegan_networks import StyleGAN2Discriminator, StyleGAN2Generator, TileStyleGAN2Discriminator - -############################################################################### -# Helper Functions -############################################################################### - - -def get_filter(filt_size=3): - if(filt_size == 1): - a = np.array([1., ]) - elif(filt_size == 2): - a = np.array([1., 1.]) - elif(filt_size == 3): - a = np.array([1., 2., 1.]) - elif(filt_size == 4): - a = np.array([1., 3., 3., 1.]) - elif(filt_size == 5): - a = np.array([1., 4., 6., 4., 1.]) - elif(filt_size == 6): - a = np.array([1., 5., 10., 10., 5., 1.]) - elif(filt_size == 7): - a = np.array([1., 6., 15., 20., 15., 6., 1.]) - - filt = torch.Tensor(a[:, None] * a[None, :]) - filt = filt / torch.sum(filt) - - return filt - - -class Downsample(nn.Module): - def __init__(self, channels, pad_type='reflect', filt_size=3, stride=2, pad_off=0): - super(Downsample, self).__init__() - self.filt_size = filt_size - self.pad_off = pad_off - self.pad_sizes = [int(1. * (filt_size - 1) / 2), int(np.ceil(1. * (filt_size - 1) / 2)), int(1. * (filt_size - 1) / 2), int(np.ceil(1. * (filt_size - 1) / 2))] - self.pad_sizes = [pad_size + pad_off for pad_size in self.pad_sizes] - self.stride = stride - self.off = int((self.stride - 1) / 2.) - self.channels = channels - - filt = get_filter(filt_size=self.filt_size) - self.register_buffer('filt', filt[None, None, :, :].repeat((self.channels, 1, 1, 1))) - - self.pad = get_pad_layer(pad_type)(self.pad_sizes) - - def forward(self, inp): - if(self.filt_size == 1): - if(self.pad_off == 0): - return inp[:, :, ::self.stride, ::self.stride] - else: - return self.pad(inp)[:, :, ::self.stride, ::self.stride] - else: - return F.conv2d(self.pad(inp), self.filt, stride=self.stride, groups=inp.shape[1]) - - -class Upsample2(nn.Module): - def __init__(self, scale_factor, mode='nearest'): - super().__init__() - self.factor = scale_factor - self.mode = mode - - def forward(self, x): - return torch.nn.functional.interpolate(x, scale_factor=self.factor, mode=self.mode) - - -class Upsample(nn.Module): - def __init__(self, channels, pad_type='repl', filt_size=4, stride=2): - super(Upsample, self).__init__() - self.filt_size = filt_size - self.filt_odd = np.mod(filt_size, 2) == 1 - self.pad_size = int((filt_size - 1) / 2) - self.stride = stride - self.off = int((self.stride - 1) / 2.) 
- self.channels = channels - - filt = get_filter(filt_size=self.filt_size) * (stride**2) - self.register_buffer('filt', filt[None, None, :, :].repeat((self.channels, 1, 1, 1))) - - self.pad = get_pad_layer(pad_type)([1, 1, 1, 1]) - - def forward(self, inp): - ret_val = F.conv_transpose2d(self.pad(inp), self.filt, stride=self.stride, padding=1 + self.pad_size, groups=inp.shape[1])[:, :, 1:, 1:] - if(self.filt_odd): - return ret_val - else: - return ret_val[:, :, :-1, :-1] - - -def get_pad_layer(pad_type): - if(pad_type in ['refl', 'reflect']): - PadLayer = nn.ReflectionPad2d - elif(pad_type in ['repl', 'replicate']): - PadLayer = nn.ReplicationPad2d - elif(pad_type == 'zero'): - PadLayer = nn.ZeroPad2d - else: - print('Pad type [%s] not recognized' % pad_type) - return PadLayer - - -class Identity(nn.Module): - def forward(self, x): - return x - - -def get_norm_layer(norm_type='instance'): - """Return a normalization layer - - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. - """ - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == 'none': - def norm_layer(x): - return Identity() - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - - -def get_scheduler(optimizer, opt): - """Return a learning rate scheduler - - Parameters: - optimizer -- the optimizer of the network - opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.  - opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine - - For 'linear', we keep the same learning rate for the first epochs - and linearly decay the rate to zero over the next epochs. - For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. - See https://pytorch.org/docs/stable/optim.html for more details. - """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def init_weights(net, init_type='normal', init_gain=0.02, debug=False): - """Initialize network weights. - - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might - work better for some applications. Feel free to try yourself. 
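To make the 'linear' policy in get_scheduler above concrete: the multiplier returned by lambda_rule stays at 1.0 for the first n_epochs and then falls linearly to zero over the following n_epochs_decay epochs. A runnable sketch with hypothetical option values (epoch_count=1, n_epochs=100, n_epochs_decay=100) standing in for the opt object:

import torch
from torch.optim import lr_scheduler

epoch_count, n_epochs, n_epochs_decay = 1, 100, 100          # hypothetical opt.* values
optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=2e-4)

def lambda_rule(epoch):
    # same rule as in get_scheduler: flat, then linear decay to zero
    return 1.0 - max(0, epoch + epoch_count - n_epochs) / float(n_epochs_decay + 1)

scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
for _ in range(200):
    optimizer.step()
    scheduler.step()

# after 200 epochs the multiplier is 1 - (200 + 1 - 100) / 101 = 0.0
print(optimizer.param_groups[0]['lr'])                       # 0.0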
- """ - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if debug: - print(classname) - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[], debug=False, initialize_weights=True): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. - """ - if len(gpu_ids) > 0: - assert(torch.cuda.is_available()) - net.to(gpu_ids[0]) - # if not amp: - # net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs for non-AMP training - if initialize_weights: - init_weights(net, init_type, init_gain=init_gain, debug=debug) - return net - - -def define_G(input_nc, output_nc, ngf, netG, norm='batch', use_dropout=False, init_type='normal', - init_gain=0.02, no_antialias=False, no_antialias_up=False, gpu_ids=[], opt=None): - """Create a generator - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - netG (str) -- the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128 - norm (str) -- the name of normalization layers used in the network: batch | instance | none - use_dropout (bool) -- if use dropout layers. - init_type (str) -- the name of our initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a generator - - Our current implementation provides two types of generators: - U-Net: [unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images) - The original U-Net paper: https://arxiv.org/abs/1505.04597 - - Resnet-based generator: [resnet_6blocks] (with 6 Resnet blocks) and [resnet_9blocks] (with 9 Resnet blocks) - Resnet-based generator consists of several Resnet blocks between a few downsampling/upsampling operations. - We adapt Torch code from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style). - - - The generator has been initialized by . It uses RELU for non-linearity. 
- """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - if netG == 'resnet_9blocks': - net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, no_antialias=no_antialias, no_antialias_up=no_antialias_up, n_blocks=9, opt=opt) - elif netG == 'resnet_6blocks': - net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, no_antialias=no_antialias, no_antialias_up=no_antialias_up, n_blocks=6, opt=opt) - elif netG == 'resnet_4blocks': - net = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, no_antialias=no_antialias, no_antialias_up=no_antialias_up, n_blocks=4, opt=opt) - elif netG == 'unet_128': - net = UnetGenerator(input_nc, output_nc, 7, ngf, norm_layer=norm_layer, use_dropout=use_dropout) - elif netG == 'unet_256': - net = UnetGenerator(input_nc, output_nc, 8, ngf, norm_layer=norm_layer, use_dropout=use_dropout) - elif netG == 'stylegan2': - net = StyleGAN2Generator(input_nc, output_nc, ngf, use_dropout=use_dropout, opt=opt) - elif netG == 'smallstylegan2': - net = StyleGAN2Generator(input_nc, output_nc, ngf, use_dropout=use_dropout, n_blocks=2, opt=opt) - elif netG == 'resnet_cat': - n_blocks = 8 - net = G_Resnet(input_nc, output_nc, opt.nz, num_downs=2, n_res=n_blocks - 4, ngf=ngf, norm='inst', nl_layer='relu') - else: - raise NotImplementedError('Generator model name [%s] is not recognized' % netG) - return init_net(net, init_type, init_gain, gpu_ids, initialize_weights=('stylegan2' not in netG)) - - -def define_F(input_nc, netF, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, no_antialias=False, gpu_ids=[], opt=None): - if netF == 'global_pool': - net = PoolingF() - elif netF == 'reshape': - net = ReshapeF() - elif netF == 'sample': - net = PatchSampleF(use_mlp=False, init_type=init_type, init_gain=init_gain, gpu_ids=gpu_ids, nc=opt.netF_nc) - elif netF == 'mlp_sample': - net = PatchSampleF(use_mlp=True, init_type=init_type, init_gain=init_gain, gpu_ids=gpu_ids, nc=opt.netF_nc) - elif netF == 'strided_conv': - net = StridedConvF(init_type=init_type, init_gain=init_gain, gpu_ids=gpu_ids) - else: - raise NotImplementedError('projection model name [%s] is not recognized' % netF) - return init_net(net, init_type, init_gain, gpu_ids) - -def define_D(input_nc, ndf, netD, n_layers_D=3, norm='batch', init_type='normal', init_gain=0.02, no_antialias=False, gpu_ids=[], opt=None): - """Create a discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the first conv layer - netD (str) -- the architecture's name: basic | n_layers | pixel - n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers' - norm (str) -- the type of normalization layers used in the network. - init_type (str) -- the name of the initialization method. - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Returns a discriminator - - Our current implementation provides three types of discriminators: - [basic]: 'PatchGAN' classifier described in the original pix2pix paper. - It can classify whether 70×70 overlapping patches are real or fake. - Such a patch-level discriminator architecture has fewer parameters - than a full-image discriminator and can work on arbitrarily-sized images - in a fully convolutional fashion. 
- - [n_layers]: With this mode, you can specify the number of conv layers in the discriminator - with the parameter n_layers_D (default=3 as used in [basic] (PatchGAN).) - - [pixel]: 1x1 PixelGAN discriminator can classify whether a pixel is real or not. - It encourages greater color diversity but has no effect on spatial statistics. - - The discriminator has been initialized by init_net. It uses Leaky RELU for non-linearity. - """ - net = None - norm_layer = get_norm_layer(norm_type=norm) - - if netD == 'basic':  # default PatchGAN classifier - net = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer, no_antialias=no_antialias,) - elif netD == 'n_layers':  # more options - net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer, no_antialias=no_antialias,) - elif netD == 'pixel':  # classify if each pixel is real or fake - net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer) - elif 'stylegan2' in netD: - net = StyleGAN2Discriminator(input_nc, ndf, n_layers_D, no_antialias=no_antialias, opt=opt) - else: - raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD) - return init_net(net, init_type, init_gain, gpu_ids, - initialize_weights=('stylegan2' not in netD)) - - -############################################################################## -# Classes -############################################################################## -class GANLoss(nn.Module): - """Define different GAN objectives. - - The GANLoss class abstracts away the need to create the target label tensor - that has the same size as the input. - """ - - def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0): - """ Initialize the GANLoss class. - - Parameters: - gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp. - target_real_label (bool) - - label for a real image - target_fake_label (bool) - - label of a fake image - - Note: Do not use sigmoid as the last layer of Discriminator. - LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss. - """ - super(GANLoss, self).__init__() - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - self.gan_mode = gan_mode - if gan_mode == 'lsgan': - self.loss = nn.MSELoss() - elif gan_mode == 'vanilla': - self.loss = nn.BCEWithLogitsLoss() - elif gan_mode in ['wgangp', 'nonsaturating']: - self.loss = None - else: - raise NotImplementedError('gan mode %s not implemented' % gan_mode) - - def get_target_tensor(self, prediction, target_is_real): - """Create label tensors with the same size as the input. - - Parameters: - prediction (tensor) - - typically the prediction from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - A label tensor filled with ground truth label, and with the size of the input - """ - - if target_is_real: - target_tensor = self.real_label - else: - target_tensor = self.fake_label - return target_tensor.expand_as(prediction) - - def __call__(self, prediction, target_is_real): - """Calculate loss given Discriminator's output and ground truth labels. - - Parameters: - prediction (tensor) - - typically the prediction output from a discriminator - target_is_real (bool) - - if the ground truth label is for real images or fake images - - Returns: - the calculated loss.
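As a usage sketch, assuming this file is importable as Scenimefy.models.networks (the path it lives at in this repository) and using a toy one-layer stand-in for the discriminator that define_D would normally build, GANLoss is typically driven like this inside a training step:

import torch
import torch.nn as nn
from Scenimefy.models.networks import GANLoss   # assumed import path for this file

netD = nn.Conv2d(3, 1, kernel_size=4, stride=2, padding=1)   # toy PatchGAN-style stand-in
criterion = GANLoss('lsgan')

real = torch.randn(2, 3, 16, 16)
fake = torch.randn(2, 3, 16, 16)                 # would normally come from the generator

# discriminator update: real patches should score 1, fake patches 0
loss_D = 0.5 * (criterion(netD(real), True) + criterion(netD(fake.detach()), False))

# generator update: try to make the discriminator score fakes as real
loss_G = criterion(netD(fake), True)
print(loss_D.item(), loss_G.item())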
- """ - bs = prediction.size(0) - if self.gan_mode in ['lsgan', 'vanilla']: - target_tensor = self.get_target_tensor(prediction, target_is_real) - loss = self.loss(prediction, target_tensor) - elif self.gan_mode == 'wgangp': - if target_is_real: - loss = -prediction.mean() - else: - loss = prediction.mean() - elif self.gan_mode == 'nonsaturating': - if target_is_real: - loss = F.softplus(-prediction).view(bs, -1).mean(dim=1) - else: - loss = F.softplus(prediction).view(bs, -1).mean(dim=1) - return loss - - -def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0): - """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028 - - Arguments: - netD (network) -- discriminator network - real_data (tensor array) -- real images - fake_data (tensor array) -- generated images from the generator - device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') - type (str) -- if we mix real and fake data or not [real | fake | mixed]. - constant (float) -- the constant used in formula ( | |gradient||_2 - constant)^2 - lambda_gp (float) -- weight for this loss - - Returns the gradient penalty loss - """ - if lambda_gp > 0.0: - if type == 'real': # either use real images, fake images, or a linear interpolation of two. - interpolatesv = real_data - elif type == 'fake': - interpolatesv = fake_data - elif type == 'mixed': - alpha = torch.rand(real_data.shape[0], 1, device=device) - alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view(*real_data.shape) - interpolatesv = alpha * real_data + ((1 - alpha) * fake_data) - else: - raise NotImplementedError('{} not implemented'.format(type)) - interpolatesv.requires_grad_(True) - disc_interpolates = netD(interpolatesv) - gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv, - grad_outputs=torch.ones(disc_interpolates.size()).to(device), - create_graph=True, retain_graph=True, only_inputs=True) - gradients = gradients[0].view(real_data.size(0), -1) # flat the data - gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps - return gradient_penalty, gradients - else: - return 0.0, None - - -class Normalize(nn.Module): - - def __init__(self, power=2): - super(Normalize, self).__init__() - self.power = power - - def forward(self, x): - norm = x.pow(self.power).sum(1, keepdim=True).pow(1. 
/ self.power) - out = x.div(norm + 1e-7) - return out - - -class PoolingF(nn.Module): - def __init__(self): - super(PoolingF, self).__init__() - model = [nn.AdaptiveMaxPool2d(1)] - self.model = nn.Sequential(*model) - self.l2norm = Normalize(2) - - def forward(self, x): - return self.l2norm(self.model(x)) - - -class ReshapeF(nn.Module): - def __init__(self): - super(ReshapeF, self).__init__() - model = [nn.AdaptiveAvgPool2d(4)] - self.model = nn.Sequential(*model) - self.l2norm = Normalize(2) - - def forward(self, x): - x = self.model(x) - x_reshape = x.permute(0, 2, 3, 1).flatten(0, 2) - return self.l2norm(x_reshape) - - -class StridedConvF(nn.Module): - def __init__(self, init_type='normal', init_gain=0.02, gpu_ids=[]): - super().__init__() - # self.conv1 = nn.Conv2d(256, 128, 3, stride=2) - # self.conv2 = nn.Conv2d(128, 64, 3, stride=1) - self.l2_norm = Normalize(2) - self.mlps = {} - self.moving_averages = {} - self.init_type = init_type - self.init_gain = init_gain - self.gpu_ids = gpu_ids - - def create_mlp(self, x): - C, H = x.shape[1], x.shape[2] - n_down = int(np.rint(np.log2(H / 32))) - mlp = [] - for i in range(n_down): - mlp.append(nn.Conv2d(C, max(C // 2, 64), 3, stride=2)) - mlp.append(nn.ReLU()) - C = max(C // 2, 64) - mlp.append(nn.Conv2d(C, 64, 3)) - mlp = nn.Sequential(*mlp) - init_net(mlp, self.init_type, self.init_gain, self.gpu_ids) - return mlp - - def update_moving_average(self, key, x): - if key not in self.moving_averages: - self.moving_averages[key] = x.detach() - - self.moving_averages[key] = self.moving_averages[key] * 0.999 + x.detach() * 0.001 - - def forward(self, x, use_instance_norm=False): - C, H = x.shape[1], x.shape[2] - key = '%d_%d' % (C, H) - if key not in self.mlps: - self.mlps[key] = self.create_mlp(x) - self.add_module("child_%s" % key, self.mlps[key]) - mlp = self.mlps[key] - x = mlp(x) - self.update_moving_average(key, x) - x = x - self.moving_averages[key] - if use_instance_norm: - x = F.instance_norm(x) - return self.l2_norm(x) - - -class PatchSampleF(nn.Module): - def __init__(self, use_mlp=False, init_type='normal', init_gain=0.02, nc=256, gpu_ids=[]): - # potential issues: currently, we use the same patch_ids for multiple images in the batch - super(PatchSampleF, self).__init__() - self.l2norm = Normalize(2) - self.use_mlp = use_mlp - self.nc = nc # hard-coded - self.mlp_init = False - self.init_type = init_type - self.init_gain = init_gain - self.gpu_ids = gpu_ids - - def create_mlp(self, feats): - for mlp_id, feat in enumerate(feats): - input_nc = feat.shape[1] - mlp = nn.Sequential(*[nn.Linear(input_nc, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]) - if len(self.gpu_ids) > 0: - mlp.cuda() - setattr(self, 'mlp_%d' % mlp_id, mlp) - init_net(self, self.init_type, self.init_gain, self.gpu_ids) - self.mlp_init = True - - def forward(self, feats, num_patches=64, patch_ids=None): - return_ids = [] - return_feats = [] - if self.use_mlp and not self.mlp_init: - self.create_mlp(feats) - for feat_id, feat in enumerate(feats): - B, H, W = feat.shape[0], feat.shape[2], feat.shape[3] - feat_reshape = feat.permute(0, 2, 3, 1).flatten(1, 2) - if num_patches > 0: - if patch_ids is not None: - patch_id = patch_ids[feat_id] - else: - patch_id = torch.randperm(feat_reshape.shape[1], device=feats[0].device) - patch_id = patch_id[:int(min(num_patches, patch_id.shape[0]))] # .to(patch_ids.device) - x_sample = feat_reshape[:, patch_id, :].flatten(0, 1) # reshape(-1, x.shape[1]) - else: - x_sample = feat_reshape - patch_id = [] - if self.use_mlp: - mlp = 
getattr(self, 'mlp_%d' % feat_id) - x_sample = mlp(x_sample) - return_ids.append(patch_id) - x_sample = self.l2norm(x_sample) - - if num_patches == 0: - x_sample = x_sample.permute(0, 2, 1).reshape([B, x_sample.shape[-1], H, W]) - return_feats.append(x_sample) - return return_feats, return_ids - - -class G_Resnet(nn.Module): - def __init__(self, input_nc, output_nc, nz, num_downs, n_res, ngf=64, - norm=None, nl_layer=None): - super(G_Resnet, self).__init__() - n_downsample = num_downs - pad_type = 'reflect' - self.enc_content = ContentEncoder(n_downsample, n_res, input_nc, ngf, norm, nl_layer, pad_type=pad_type) - if nz == 0: - self.dec = Decoder(n_downsample, n_res, self.enc_content.output_dim, output_nc, norm=norm, activ=nl_layer, pad_type=pad_type, nz=nz) - else: - self.dec = Decoder_all(n_downsample, n_res, self.enc_content.output_dim, output_nc, norm=norm, activ=nl_layer, pad_type=pad_type, nz=nz) - - def decode(self, content, style=None): - return self.dec(content, style) - - def forward(self, image, style=None, nce_layers=[], encode_only=False): - content, feats = self.enc_content(image, nce_layers=nce_layers, encode_only=encode_only) - if encode_only: - return feats - else: - images_recon = self.decode(content, style) - if len(nce_layers) > 0: - return images_recon, feats - else: - return images_recon - -################################################################################## -# Encoder and Decoders -################################################################################## - - -class E_adaIN(nn.Module): - def __init__(self, input_nc, output_nc=1, nef=64, n_layers=4, - norm=None, nl_layer=None, vae=False): - # style encoder - super(E_adaIN, self).__init__() - self.enc_style = StyleEncoder(n_layers, input_nc, nef, output_nc, norm='none', activ='relu', vae=vae) - - def forward(self, image): - style = self.enc_style(image) - return style - - -class StyleEncoder(nn.Module): - def __init__(self, n_downsample, input_dim, dim, style_dim, norm, activ, vae=False): - super(StyleEncoder, self).__init__() - self.vae = vae - self.model = [] - self.model += [Conv2dBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type='reflect')] - for i in range(2): - self.model += [Conv2dBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type='reflect')] - dim *= 2 - for i in range(n_downsample - 2): - self.model += [Conv2dBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type='reflect')] - self.model += [nn.AdaptiveAvgPool2d(1)] # global average pooling - if self.vae: - self.fc_mean = nn.Linear(dim, style_dim) # , 1, 1, 0) - self.fc_var = nn.Linear(dim, style_dim) # , 1, 1, 0) - else: - self.model += [nn.Conv2d(dim, style_dim, 1, 1, 0)] - - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def forward(self, x): - if self.vae: - output = self.model(x) - output = output.view(x.size(0), -1) - output_mean = self.fc_mean(output) - output_var = self.fc_var(output) - return output_mean, output_var - else: - return self.model(x).view(x.size(0), -1) - - -class ContentEncoder(nn.Module): - def __init__(self, n_downsample, n_res, input_dim, dim, norm, activ, pad_type='zero'): - super(ContentEncoder, self).__init__() - self.model = [] - self.model += [Conv2dBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type='reflect')] - # downsampling blocks - for i in range(n_downsample): - self.model += [Conv2dBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type='reflect')] - dim *= 2 - # residual blocks - self.model += 
[ResBlocks(n_res, dim, norm=norm, activation=activ, pad_type=pad_type)] - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def forward(self, x, nce_layers=[], encode_only=False): - if len(nce_layers) > 0: - feat = x - feats = [] - for layer_id, layer in enumerate(self.model): - feat = layer(feat) - if layer_id in nce_layers: - feats.append(feat) - if layer_id == nce_layers[-1] and encode_only: - return None, feats - return feat, feats - else: - return self.model(x), None - - for layer_id, layer in enumerate(self.model): - print(layer_id, layer) - - -class Decoder_all(nn.Module): - def __init__(self, n_upsample, n_res, dim, output_dim, norm='batch', activ='relu', pad_type='zero', nz=0): - super(Decoder_all, self).__init__() - # AdaIN residual blocks - self.resnet_block = ResBlocks(n_res, dim, norm, activ, pad_type=pad_type, nz=nz) - self.n_blocks = 0 - # upsampling blocks - for i in range(n_upsample): - block = [Upsample2(scale_factor=2), Conv2dBlock(dim + nz, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type='reflect')] - setattr(self, 'block_{:d}'.format(self.n_blocks), nn.Sequential(*block)) - self.n_blocks += 1 - dim //= 2 - # use reflection padding in the last conv layer - setattr(self, 'block_{:d}'.format(self.n_blocks), Conv2dBlock(dim + nz, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type='reflect')) - self.n_blocks += 1 - - def forward(self, x, y=None): - if y is not None: - output = self.resnet_block(cat_feature(x, y)) - for n in range(self.n_blocks): - block = getattr(self, 'block_{:d}'.format(n)) - if n > 0: - output = block(cat_feature(output, y)) - else: - output = block(output) - return output - - -class Decoder(nn.Module): - def __init__(self, n_upsample, n_res, dim, output_dim, norm='batch', activ='relu', pad_type='zero', nz=0): - super(Decoder, self).__init__() - - self.model = [] - # AdaIN residual blocks - self.model += [ResBlocks(n_res, dim, norm, activ, pad_type=pad_type, nz=nz)] - # upsampling blocks - for i in range(n_upsample): - if i == 0: - input_dim = dim + nz - else: - input_dim = dim - self.model += [Upsample2(scale_factor=2), Conv2dBlock(input_dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type='reflect')] - dim //= 2 - # use reflection padding in the last conv layer - self.model += [Conv2dBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type='reflect')] - self.model = nn.Sequential(*self.model) - - def forward(self, x, y=None): - if y is not None: - return self.model(cat_feature(x, y)) - else: - return self.model(x) - -################################################################################## -# Sequential Models -################################################################################## - - -class ResBlocks(nn.Module): - def __init__(self, num_blocks, dim, norm='inst', activation='relu', pad_type='zero', nz=0): - super(ResBlocks, self).__init__() - self.model = [] - for i in range(num_blocks): - self.model += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type, nz=nz)] - self.model = nn.Sequential(*self.model) - - def forward(self, x): - return self.model(x) - - -################################################################################## -# Basic Blocks -################################################################################## -def cat_feature(x, y): - y_expand = y.view(y.size(0), y.size(1), 1, 1).expand( - y.size(0), y.size(1), x.size(2), x.size(3)) - x_cat = torch.cat([x, y_expand], 1) - return x_cat - - -class ResBlock(nn.Module): - def 
__init__(self, dim, norm='inst', activation='relu', pad_type='zero', nz=0): - super(ResBlock, self).__init__() - - model = [] - model += [Conv2dBlock(dim + nz, dim, 3, 1, 1, norm=norm, activation=activation, pad_type=pad_type)] - model += [Conv2dBlock(dim, dim + nz, 3, 1, 1, norm=norm, activation='none', pad_type=pad_type)] - self.model = nn.Sequential(*model) - - def forward(self, x): - residual = x - out = self.model(x) - out += residual - return out - - -class Conv2dBlock(nn.Module): - def __init__(self, input_dim, output_dim, kernel_size, stride, - padding=0, norm='none', activation='relu', pad_type='zero'): - super(Conv2dBlock, self).__init__() - self.use_bias = True - # initialize padding - if pad_type == 'reflect': - self.pad = nn.ReflectionPad2d(padding) - elif pad_type == 'zero': - self.pad = nn.ZeroPad2d(padding) - else: - assert 0, "Unsupported padding type: {}".format(pad_type) - - # initialize normalization - norm_dim = output_dim - if norm == 'batch': - self.norm = nn.BatchNorm2d(norm_dim) - elif norm == 'inst': - self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=False) - elif norm == 'ln': - self.norm = LayerNorm(norm_dim) - elif norm == 'none': - self.norm = None - else: - assert 0, "Unsupported normalization: {}".format(norm) - - # initialize activation - if activation == 'relu': - self.activation = nn.ReLU(inplace=True) - elif activation == 'lrelu': - self.activation = nn.LeakyReLU(0.2, inplace=True) - elif activation == 'prelu': - self.activation = nn.PReLU() - elif activation == 'selu': - self.activation = nn.SELU(inplace=True) - elif activation == 'tanh': - self.activation = nn.Tanh() - elif activation == 'none': - self.activation = None - else: - assert 0, "Unsupported activation: {}".format(activation) - - # initialize convolution - self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias) - - def forward(self, x): - x = self.conv(self.pad(x)) - if self.norm: - x = self.norm(x) - if self.activation: - x = self.activation(x) - return x - - -class LinearBlock(nn.Module): - def __init__(self, input_dim, output_dim, norm='none', activation='relu'): - super(LinearBlock, self).__init__() - use_bias = True - # initialize fully connected layer - self.fc = nn.Linear(input_dim, output_dim, bias=use_bias) - - # initialize normalization - norm_dim = output_dim - if norm == 'batch': - self.norm = nn.BatchNorm1d(norm_dim) - elif norm == 'inst': - self.norm = nn.InstanceNorm1d(norm_dim) - elif norm == 'ln': - self.norm = LayerNorm(norm_dim) - elif norm == 'none': - self.norm = None - else: - assert 0, "Unsupported normalization: {}".format(norm) - - # initialize activation - if activation == 'relu': - self.activation = nn.ReLU(inplace=True) - elif activation == 'lrelu': - self.activation = nn.LeakyReLU(0.2, inplace=True) - elif activation == 'prelu': - self.activation = nn.PReLU() - elif activation == 'selu': - self.activation = nn.SELU(inplace=True) - elif activation == 'tanh': - self.activation = nn.Tanh() - elif activation == 'none': - self.activation = None - else: - assert 0, "Unsupported activation: {}".format(activation) - - def forward(self, x): - out = self.fc(x) - if self.norm: - out = self.norm(out) - if self.activation: - out = self.activation(out) - return out - -################################################################################## -# Normalization layers -################################################################################## - - -class LayerNorm(nn.Module): - def __init__(self, num_features, eps=1e-5, 
affine=True): - super(LayerNorm, self).__init__() - self.num_features = num_features - self.affine = affine - self.eps = eps - - if self.affine: - self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_()) - self.beta = nn.Parameter(torch.zeros(num_features)) - - def forward(self, x): - shape = [-1] + [1] * (x.dim() - 1) - mean = x.view(x.size(0), -1).mean(1).view(*shape) - std = x.view(x.size(0), -1).std(1).view(*shape) - x = (x - mean) / (std + self.eps) - - if self.affine: - shape = [1, -1] + [1] * (x.dim() - 2) - x = x * self.gamma.view(*shape) + self.beta.view(*shape) - return x - - -class ResnetGenerator(nn.Module): - """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations. - - We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style) - """ - - def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, no_antialias_up=False, opt=None): - """Construct a Resnet-based generator - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers - n_blocks (int) -- the number of ResNet blocks - padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero - """ - assert(n_blocks >= 0) - super(ResnetGenerator, self).__init__() - self.opt = opt - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - model = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias), - norm_layer(ngf), - nn.ReLU(True)] - - n_downsampling = 2 - for i in range(n_downsampling): # add downsampling layers - mult = 2 ** i - if(no_antialias): - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True)] - else: - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=1, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True), - Downsample(ngf * mult * 2)] - - mult = 2 ** n_downsampling - for i in range(n_blocks): # add ResNet blocks - - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] - - for i in range(n_downsampling): # add upsampling layers - mult = 2 ** (n_downsampling - i) - if no_antialias_up: - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), - kernel_size=3, stride=2, - padding=1, output_padding=1, - bias=use_bias), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True)] - else: - model += [Upsample(ngf * mult), - nn.Conv2d(ngf * mult, int(ngf * mult / 2), - kernel_size=3, stride=1, - padding=1, # output_padding=1, - bias=use_bias), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True)] - model += [nn.ReflectionPad2d(3)] - model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - model += [nn.Tanh()] - - self.model = nn.Sequential(*model) - - def forward(self, input, layers=[], encode_only=False,mode='all',stop_layer=16): - if -1 in layers: - layers.append(len(self.model)) - if len(layers) > 0: - feat = input - feats = [] - for layer_id, layer in enumerate(self.model): - # print(layer_id, layer) - feat = layer(feat) - if 
layer_id in layers: - # print("%d: adding the output of %s %d" % (layer_id, layer.__class__.__name__, feat.size(1))) - feats.append(feat) - else: - # print("%d: skipping %s %d" % (layer_id, layer.__class__.__name__, feat.size(1))) - pass - if layer_id == layers[-1] and encode_only: - # print('encoder only return features') - return feats # return intermediate features alone; stop in the last layers - - return feat, feats # return both output and intermediate features - else: - """Standard forward""" - if mode=='encoder': - feat=input - for layer_id, layer in enumerate(self.model): - feat = layer(feat) - if layer_id == stop_layer: - # print('encoder only return features') - return feat # return intermediate features alone; stop in the last layers - elif mode =='decoder': - feat=input - for layer_id, layer in enumerate(self.model): - - if layer_id > stop_layer: - feat = layer(feat) - else: - pass - # print('encoder only return features') - return feat # return intermediate features alone; stop in the last layers - else: - fake = self.model(input) - return fake - -# class ResnetGenerator(nn.Module): -# """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations. - -# We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style) -# """ - -# def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, no_antialias_up=False, opt=None): -# """Construct a Resnet-based generator - -# Parameters: -# input_nc (int) -- the number of channels in input images -# output_nc (int) -- the number of channels in output images -# ngf (int) -- the number of filters in the last conv layer -# norm_layer -- normalization layer -# use_dropout (bool) -- if use dropout layers -# n_blocks (int) -- the number of ResNet blocks -# padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero -# """ -# assert(n_blocks >= 0) -# super(ResnetGenerator, self).__init__() -# self.opt = opt -# if type(norm_layer) == functools.partial: -# use_bias = norm_layer.func == nn.InstanceNorm2d -# else: -# use_bias = norm_layer == nn.InstanceNorm2d - -# model = [nn.ReflectionPad2d(3), -# nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias), -# norm_layer(ngf), -# nn.ReLU(True)] - -# n_downsampling = 2 -# for i in range(n_downsampling): # add downsampling layers -# mult = 2 ** i -# if(no_antialias): -# model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias), -# norm_layer(ngf * mult * 2), -# nn.ReLU(True)] -# else: -# model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=1, padding=1, bias=use_bias), -# norm_layer(ngf * mult * 2), -# nn.ReLU(True), -# Downsample(ngf * mult * 2)] - -# mult = 2 ** n_downsampling -# for i in range(n_blocks): # add ResNet blocks - -# model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] - -# for i in range(n_downsampling): # add upsampling layers -# mult = 2 ** (n_downsampling - i) -# if no_antialias_up: -# model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), -# kernel_size=3, stride=2, -# padding=1, output_padding=1, -# bias=use_bias), -# norm_layer(int(ngf * mult / 2)), -# nn.ReLU(True)] -# else: -# model += [Upsample(ngf * mult), -# nn.Conv2d(ngf * mult, int(ngf * mult / 2), -# kernel_size=3, stride=1, -# padding=1, # 
output_padding=1, -# bias=use_bias), -# norm_layer(int(ngf * mult / 2)), -# nn.ReLU(True)] -# model += [nn.ReflectionPad2d(3)] -# model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] -# model += [nn.Tanh()] - -# self.model = nn.Sequential(*model) - -# def forward(self, input, layers=[], encode_only=False): -# if -1 in layers: -# layers.append(len(self.model)) -# if len(layers) > 0: -# feat = input -# feats = [] -# for layer_id, layer in enumerate(self.model): -# # print(layer_id, layer) -# feat = layer(feat) -# if layer_id in layers: -# # print("%d: adding the output of %s %d" % (layer_id, layer.__class__.__name__, feat.size(1))) -# feats.append(feat) -# else: -# # print("%d: skipping %s %d" % (layer_id, layer.__class__.__name__, feat.size(1))) -# pass -# if layer_id == layers[-1] and encode_only: -# # print('encoder only return features') -# return feats # return intermediate features alone; stop in the last layers - -# return feat, feats # return both output and intermediate features -# else: -# """Standard forward""" -# fake = self.model(input) -# return fake - -class ResnetDecoder(nn.Module): - """Resnet-based decoder that consists of a few Resnet blocks + a few upsampling operations. - """ - - def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False): - """Construct a Resnet-based decoder - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers - n_blocks (int) -- the number of ResNet blocks - padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero - """ - assert(n_blocks >= 0) - super(ResnetDecoder, self).__init__() - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - model = [] - n_downsampling = 2 - mult = 2 ** n_downsampling - for i in range(n_blocks): # add ResNet blocks - - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] - - for i in range(n_downsampling): # add upsampling layers - mult = 2 ** (n_downsampling - i) - if(no_antialias): - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), - kernel_size=3, stride=2, - padding=1, output_padding=1, - bias=use_bias), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True)] - else: - model += [Upsample(ngf * mult), - nn.Conv2d(ngf * mult, int(ngf * mult / 2), - kernel_size=3, stride=1, - padding=1, - bias=use_bias), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True)] - model += [nn.ReflectionPad2d(3)] - model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - model += [nn.Tanh()] - - self.model = nn.Sequential(*model) - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class ResnetEncoder(nn.Module): - """Resnet-based encoder that consists of a few downsampling + several Resnet blocks - """ - - def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False): - """Construct a Resnet-based encoder - - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last 
conv layer - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers - n_blocks (int) -- the number of ResNet blocks - padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero - """ - assert(n_blocks >= 0) - super(ResnetEncoder, self).__init__() - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - model = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias), - norm_layer(ngf), - nn.ReLU(True)] - - n_downsampling = 2 - for i in range(n_downsampling): # add downsampling layers - mult = 2 ** i - if(no_antialias): - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True)] - else: - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=1, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True), - Downsample(ngf * mult * 2)] - - mult = 2 ** n_downsampling - for i in range(n_blocks): # add ResNet blocks - - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)] - - self.model = nn.Sequential(*model) - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class ResnetBlock(nn.Module): - """Define a Resnet block""" - - def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias): - """Initialize the Resnet block - - A resnet block is a conv block with skip connections - We construct a conv block with build_conv_block function, - and implement skip connections in function. - Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf - """ - super(ResnetBlock, self).__init__() - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias) - - def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias): - """Construct a convolutional block. - - Parameters: - dim (int) -- the number of channels in the conv layer. - padding_type (str) -- the name of padding layer: reflect | replicate | zero - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. 
- use_bias (bool) -- if the conv layer uses bias or not - - Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU)) - """ - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - """Forward function (with skip connections)""" - out = x + self.conv_block(x) # add skip connections - return out - - -class UnetGenerator(nn.Module): - """Create a Unet-based generator""" - - def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet generator - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - num_downs (int) -- the number of downsamplings in UNet. For example, # if |num_downs| == 7, - image of size 128x128 will become of size 1x1 # at the bottleneck - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - - We construct the U-Net from the innermost layer to the outermost layer. - It is a recursive process. - """ - super(UnetGenerator, self).__init__() - # construct unet structure - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer - for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout) - # gradually reduce the number of filters from ngf * 8 to ngf - unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - self.model = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer - - def forward(self, input): - """Standard forward""" - return self.model(input) - - -class UnetSkipConnectionBlock(nn.Module): - """Defines the Unet submodule with skip connection. - X -------------------identity---------------------- - |-- downsampling -- |submodule| -- upsampling --| - """ - - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet submodule with skip connections. 
- - Parameters: - outer_nc (int) -- the number of filters in the outer conv layer - inner_nc (int) -- the number of filters in the inner conv layer - input_nc (int) -- the number of channels in input images/features - submodule (UnetSkipConnectionBlock) -- previously defined submodules - outermost (bool) -- if this module is the outermost module - innermost (bool) -- if this module is the innermost module - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. - """ - super(UnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4, - stride=2, padding=1, bias=use_bias) - downrelu = nn.LeakyReLU(0.2, True) - downnorm = norm_layer(inner_nc) - uprelu = nn.ReLU(True) - upnorm = norm_layer(outer_nc) - - if outermost: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1) - down = [downconv] - up = [uprelu, upconv, nn.Tanh()] - model = down + [submodule] + up - elif innermost: - upconv = nn.ConvTranspose2d(inner_nc, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv] - up = [uprelu, upconv, upnorm] - model = down + up - else: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv, downnorm] - up = [uprelu, upconv, upnorm] - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: # add skip connections - return torch.cat([x, self.model(x)], 1) - - -class NLayerDiscriminator(nn.Module): - """Defines a PatchGAN discriminator""" - - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, no_antialias=False): - """Construct a PatchGAN discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - n_layers (int) -- the number of conv layers in the discriminator - norm_layer -- normalization layer - """ - super(NLayerDiscriminator, self).__init__() - if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - kw = 4 - padw = 1 - if(no_antialias): - sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)] - else: - sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=1, padding=padw), nn.LeakyReLU(0.2, True), Downsample(ndf)] - nf_mult = 1 - nf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - nf_mult_prev = nf_mult - nf_mult = min(2 ** n, 8) - if(no_antialias): - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - else: - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True), - Downsample(ndf * nf_mult)] - - nf_mult_prev = nf_mult - nf_mult = min(2 ** n_layers, 8) - sequence += [ - nn.Conv2d(ndf * 
nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map - self.model = nn.Sequential(*sequence) - - def forward(self, input): - """Standard forward.""" - return self.model(input) - - -class PixelDiscriminator(nn.Module): - """Defines a 1x1 PatchGAN discriminator (pixelGAN)""" - - def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d): - """Construct a 1x1 PatchGAN discriminator - - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - """ - super(PixelDiscriminator, self).__init__() - if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - self.net = [ - nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0), - nn.LeakyReLU(0.2, True), - nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias), - norm_layer(ndf * 2), - nn.LeakyReLU(0.2, True), - nn.Conv2d(ndf * 2, 1, kernel_size=1, stride=1, padding=0, bias=use_bias)] - - self.net = nn.Sequential(*self.net) - - def forward(self, input): - """Standard forward.""" - return self.net(input) - - -class PatchDiscriminator(NLayerDiscriminator): - """Defines a PatchGAN discriminator""" - - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, no_antialias=False): - super().__init__(input_nc, ndf, 2, norm_layer, no_antialias) - - def forward(self, input): - B, C, H, W = input.size(0), input.size(1), input.size(2), input.size(3) - size = 16 - Y = H // size - X = W // size - input = input.view(B, C, Y, size, X, size) - input = input.permute(0, 2, 4, 1, 3, 5).contiguous().view(B * Y * X, C, size, size) - return super().forward(input) - - -class GroupedChannelNorm(nn.Module): - def __init__(self, num_groups): - super().__init__() - self.num_groups = num_groups - - def forward(self, x): - shape = list(x.shape) - new_shape = [shape[0], self.num_groups, shape[1] // self.num_groups] + shape[2:] - x = x.view(*new_shape) - mean = x.mean(dim=2, keepdim=True) - std = x.std(dim=2, keepdim=True) - x_norm = (x - mean) / (std + 1e-7) - return x_norm.view(*shape) diff --git a/spaces/ZX9966/LOGO-Approximate-Computing-Technology/index.html b/spaces/ZX9966/LOGO-Approximate-Computing-Technology/index.html deleted file mode 100644 index f6ae994532b89eb6df25e770d12cc637eafc68ef..0000000000000000000000000000000000000000 --- a/spaces/ZX9966/LOGO-Approximate-Computing-Technology/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - - LOGO Compputing - - - -
    -

    Welcome to LOGO Approximate Computing Technology!

    -

    The LOGO computing paradigm provides a non-map-reduce computing method that allows you to parallelize many traditional machine learning algorithms in a statistically efficient manner.
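    As a minimal illustrative sketch only (not part of the original LOGO page or its codebase), the idea of non-map-reduce, statistically efficient parallelism can be approximated by divide-and-conquer estimation: fit the same traditional estimator independently on data shards in parallel processes and average the per-shard parameter estimates. All names here (fit_shard, n_shards, the synthetic data) are invented for this example.

    # Hypothetical sketch: divide-and-conquer ordinary least squares.
    # Each shard is fit independently in a worker process; the per-shard
    # coefficient vectors are then averaged to approximate the full-data fit.
    import numpy as np
    from multiprocessing import Pool

    def fit_shard(shard):
        X, y = shard
        # Ordinary least squares on one shard of the data.
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 5))
        true_beta = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
        y = X @ true_beta + rng.normal(scale=0.1, size=10000)

        n_shards = 4
        shards = list(zip(np.array_split(X, n_shards), np.array_split(y, n_shards)))
        with Pool(n_shards) as pool:
            coefs = pool.map(fit_shard, shards)

        # Averaging the shard estimates recovers coefficients close to true_beta.
        print(np.mean(coefs, axis=0))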

    -
    - - diff --git a/spaces/Zheng0211/mybing/Dockerfile b/spaces/Zheng0211/mybing/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/Zheng0211/mybing/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/aadnk/faster-whisper-webui/cli.py b/spaces/aadnk/faster-whisper-webui/cli.py deleted file mode 100644 index 66e8b9480b4b8c93eedabd19af0757bfde580732..0000000000000000000000000000000000000000 --- a/spaces/aadnk/faster-whisper-webui/cli.py +++ /dev/null @@ -1,206 +0,0 @@ -import argparse -import os -import pathlib -from urllib.parse import urlparse -import warnings -import numpy as np - -import torch -from app import VadOptions, WhisperTranscriber -from src.config import VAD_INITIAL_PROMPT_MODE_VALUES, ApplicationConfig, VadInitialPromptMode -from src.diarization.diarization import Diarization -from src.download import download_url -from src.languages import get_language_names - -from src.utils import optional_float, optional_int, str2bool -from src.whisper.whisperFactory import create_whisper_container - -def cli(): - app_config = ApplicationConfig.create_default() - whisper_models = app_config.get_model_names() - - # For the CLI, we fallback to saving the output to the current directory - output_dir = app_config.output_dir if app_config.output_dir is not None else "." - - # Environment variable overrides - default_whisper_implementation = os.environ.get("WHISPER_IMPLEMENTATION", app_config.whisper_implementation) - - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("audio", nargs="+", type=str, \ - help="audio file(s) to transcribe") - parser.add_argument("--model", default=app_config.default_model_name, choices=whisper_models, \ - help="name of the Whisper model to use") # medium - parser.add_argument("--model_dir", type=str, default=app_config.model_dir, \ - help="the path to save model files; uses ~/.cache/whisper by default") - parser.add_argument("--device", default=app_config.device, \ - help="device to use for PyTorch inference") - parser.add_argument("--output_dir", "-o", type=str, default=output_dir, \ - help="directory to save the outputs") - parser.add_argument("--verbose", type=str2bool, default=app_config.verbose, \ - help="whether to print out the progress and debug messages") - parser.add_argument("--whisper_implementation", type=str, default=default_whisper_implementation, choices=["whisper", "faster-whisper"],\ - help="the Whisper implementation to use") - - parser.add_argument("--task", type=str, default=app_config.task, choices=["transcribe", "translate"], \ - help="whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')") - parser.add_argument("--language", type=str, default=app_config.language, choices=sorted(get_language_names()), \ - help="language spoken in the audio, specify None to perform language detection") - - parser.add_argument("--vad", type=str, default=app_config.default_vad, choices=["none", "silero-vad", "silero-vad-skip-gaps", "silero-vad-expand-into-gaps", "periodic-vad"], \ - help="The voice activity detection algorithm to use") # silero-vad - parser.add_argument("--vad_initial_prompt_mode", type=str, default=app_config.vad_initial_prompt_mode, choices=VAD_INITIAL_PROMPT_MODE_VALUES, \ - help="Whether or not to prepend the initial prompt to each VAD 
segment (prepend_all_segments), or just the first segment (prepend_first_segment)") # prepend_first_segment - parser.add_argument("--vad_merge_window", type=optional_float, default=app_config.vad_merge_window, \ - help="The window size (in seconds) to merge voice segments") - parser.add_argument("--vad_max_merge_size", type=optional_float, default=app_config.vad_max_merge_size,\ - help="The maximum size (in seconds) of a voice segment") - parser.add_argument("--vad_padding", type=optional_float, default=app_config.vad_padding, \ - help="The padding (in seconds) to add to each voice segment") - parser.add_argument("--vad_prompt_window", type=optional_float, default=app_config.vad_prompt_window, \ - help="The window size of the prompt to pass to Whisper") - parser.add_argument("--vad_cpu_cores", type=int, default=app_config.vad_cpu_cores, \ - help="The number of CPU cores to use for VAD pre-processing.") # 1 - parser.add_argument("--vad_parallel_devices", type=str, default=app_config.vad_parallel_devices, \ - help="A commma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.") # "" - parser.add_argument("--auto_parallel", type=bool, default=app_config.auto_parallel, \ - help="True to use all available GPUs and CPU cores for processing. Use vad_cpu_cores/vad_parallel_devices to specify the number of CPU cores/GPUs to use.") # False - - parser.add_argument("--temperature", type=float, default=app_config.temperature, \ - help="temperature to use for sampling") - parser.add_argument("--best_of", type=optional_int, default=app_config.best_of, \ - help="number of candidates when sampling with non-zero temperature") - parser.add_argument("--beam_size", type=optional_int, default=app_config.beam_size, \ - help="number of beams in beam search, only applicable when temperature is zero") - parser.add_argument("--patience", type=float, default=app_config.patience, \ - help="optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search") - parser.add_argument("--length_penalty", type=float, default=app_config.length_penalty, \ - help="optional token length penalty coefficient (alpha) as in https://arxiv.org/abs/1609.08144, uses simple lengt normalization by default") - - parser.add_argument("--suppress_tokens", type=str, default=app_config.suppress_tokens, \ - help="comma-separated list of token ids to suppress during sampling; '-1' will suppress most special characters except common punctuations") - parser.add_argument("--initial_prompt", type=str, default=app_config.initial_prompt, \ - help="optional text to provide as a prompt for the first window.") - parser.add_argument("--condition_on_previous_text", type=str2bool, default=app_config.condition_on_previous_text, \ - help="if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop") - parser.add_argument("--fp16", type=str2bool, default=app_config.fp16, \ - help="whether to perform inference in fp16; True by default") - parser.add_argument("--compute_type", type=str, default=app_config.compute_type, choices=["default", "auto", "int8", "int8_float16", "int16", "float16", "float32"], \ - help="the compute type to use for inference") - - parser.add_argument("--temperature_increment_on_fallback", type=optional_float, default=app_config.temperature_increment_on_fallback, \ - 
help="temperature to increase when falling back when the decoding fails to meet either of the thresholds below") - parser.add_argument("--compression_ratio_threshold", type=optional_float, default=app_config.compression_ratio_threshold, \ - help="if the gzip compression ratio is higher than this value, treat the decoding as failed") - parser.add_argument("--logprob_threshold", type=optional_float, default=app_config.logprob_threshold, \ - help="if the average log probability is lower than this value, treat the decoding as failed") - parser.add_argument("--no_speech_threshold", type=optional_float, default=app_config.no_speech_threshold, \ - help="if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence") - - parser.add_argument("--word_timestamps", type=str2bool, default=app_config.word_timestamps, - help="(experimental) extract word-level timestamps and refine the results based on them") - parser.add_argument("--prepend_punctuations", type=str, default=app_config.prepend_punctuations, - help="if word_timestamps is True, merge these punctuation symbols with the next word") - parser.add_argument("--append_punctuations", type=str, default=app_config.append_punctuations, - help="if word_timestamps is True, merge these punctuation symbols with the previous word") - parser.add_argument("--highlight_words", type=str2bool, default=app_config.highlight_words, - help="(requires --word_timestamps True) underline each word as it is spoken in srt and vtt") - parser.add_argument("--threads", type=optional_int, default=0, - help="number of threads used by torch for CPU inference; supercedes MKL_NUM_THREADS/OMP_NUM_THREADS") - - # Diarization - parser.add_argument('--auth_token', type=str, default=app_config.auth_token, help='HuggingFace API Token (optional)') - parser.add_argument("--diarization", type=str2bool, default=app_config.diarization, \ - help="whether to perform speaker diarization") - parser.add_argument("--diarization_num_speakers", type=int, default=app_config.diarization_speakers, help="Number of speakers") - parser.add_argument("--diarization_min_speakers", type=int, default=app_config.diarization_min_speakers, help="Minimum number of speakers") - parser.add_argument("--diarization_max_speakers", type=int, default=app_config.diarization_max_speakers, help="Maximum number of speakers") - - args = parser.parse_args().__dict__ - model_name: str = args.pop("model") - model_dir: str = args.pop("model_dir") - output_dir: str = args.pop("output_dir") - device: str = args.pop("device") - os.makedirs(output_dir, exist_ok=True) - - if (threads := args.pop("threads")) > 0: - torch.set_num_threads(threads) - - whisper_implementation = args.pop("whisper_implementation") - print(f"Using {whisper_implementation} for Whisper") - - if model_name.endswith(".en") and args["language"] not in {"en", "English"}: - warnings.warn(f"{model_name} is an English-only model but receipted '{args['language']}'; using English instead.") - args["language"] = "en" - - temperature = args.pop("temperature") - temperature_increment_on_fallback = args.pop("temperature_increment_on_fallback") - if temperature_increment_on_fallback is not None: - temperature = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback)) - else: - temperature = [temperature] - - vad = args.pop("vad") - vad_initial_prompt_mode = args.pop("vad_initial_prompt_mode") - vad_merge_window = args.pop("vad_merge_window") - 
vad_max_merge_size = args.pop("vad_max_merge_size") - vad_padding = args.pop("vad_padding") - vad_prompt_window = args.pop("vad_prompt_window") - vad_cpu_cores = args.pop("vad_cpu_cores") - auto_parallel = args.pop("auto_parallel") - - compute_type = args.pop("compute_type") - highlight_words = args.pop("highlight_words") - - auth_token = args.pop("auth_token") - diarization = args.pop("diarization") - num_speakers = args.pop("diarization_num_speakers") - min_speakers = args.pop("diarization_min_speakers") - max_speakers = args.pop("diarization_max_speakers") - - transcriber = WhisperTranscriber(delete_uploaded_files=False, vad_cpu_cores=vad_cpu_cores, app_config=app_config) - transcriber.set_parallel_devices(args.pop("vad_parallel_devices")) - transcriber.set_auto_parallel(auto_parallel) - - if diarization: - transcriber.set_diarization(auth_token=auth_token, enable_daemon_process=False, num_speakers=num_speakers, min_speakers=min_speakers, max_speakers=max_speakers) - - model = create_whisper_container(whisper_implementation=whisper_implementation, model_name=model_name, - device=device, compute_type=compute_type, download_root=model_dir, models=app_config.models) - - if (transcriber._has_parallel_devices()): - print("Using parallel devices:", transcriber.parallel_device_list) - - for audio_path in args.pop("audio"): - sources = [] - - # Detect URL and download the audio - if (uri_validator(audio_path)): - # Download from YouTube/URL directly - for source_path in download_url(audio_path, maxDuration=-1, destinationDirectory=output_dir, playlistItems=None): - source_name = os.path.basename(source_path) - sources.append({ "path": source_path, "name": source_name }) - else: - sources.append({ "path": audio_path, "name": os.path.basename(audio_path) }) - - for source in sources: - source_path = source["path"] - source_name = source["name"] - - vadOptions = VadOptions(vad, vad_merge_window, vad_max_merge_size, vad_padding, vad_prompt_window, - VadInitialPromptMode.from_string(vad_initial_prompt_mode)) - - result = transcriber.transcribe_file(model, source_path, temperature=temperature, vadOptions=vadOptions, **args) - - transcriber.write_result(result, source_name, output_dir, highlight_words) - - transcriber.close() - -def uri_validator(x): - try: - result = urlparse(x) - return all([result.scheme, result.netloc]) - except: - return False - -if __name__ == '__main__': - cli() \ No newline at end of file diff --git a/spaces/abbbbbbbbbbbbbb/poetry/app.py b/spaces/abbbbbbbbbbbbbb/poetry/app.py deleted file mode 100644 index efaa75a33c842dfcf61a8f7f9bfad2791bc1ef05..0000000000000000000000000000000000000000 --- a/spaces/abbbbbbbbbbbbbb/poetry/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import gc -import gradio as gr -from transformers import pipeline, set_seed - -pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023') -#gc.collect() -samples = [['أنت' - ,1.0, 25, 0.8, 1.0, 114],['هل غادر' - ,1.0, 25, 0.8, 1.0, 114 ],['ألا ليت' - ,1.0, 25, 0.8, 1.0, 114 ],['يا قدس' - ,1.0, 25, 0.8, 1.0, 114],['عيد بأية حال' - ,1.0, 25, 0.8, 1.0, 114],['لكل شيء إذا ما' - ,1.0, 25, 0.8, 1.0, 114 ],['.' - ,1.0, 25, 0.8, 1.0, 114]] - -notes = """ -- Enter a short prompt or select (click) one of the examples and click SEND -- Adjust parameters (temperture, top k, top p and penalty) through the slider (keep close to default values). -- For the same seed (randomness), the same output is regenerated if other parameters are fixed. 
Seed should be 0 or more (not empty) -- Clear and enter new prompt or select another example and SEND to regenerate -- The '.' means start a new line from no prompt (your prompt need not be long) -- Be patient: this runs on CPU (free tier) -- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859) -- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk. -""" -def sayPoetry(prompt, temp=1.0, topk = 50, topp = 1.0, penalty=1.0, seed=114): - if not int(seed) >= 0: seed=114 - set_seed(seed) - gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty, - min_length = 64, no_repeat_ngram_size = 3, return_full_text=True, - num_beams=5, num_return_sequences=1)[0]["generated_text"] - poetry ="" - for line in gen.split('.')[:-1]: - poetry += line #+ "\n" - return poetry -poetry = gr.Interface(fn=sayPoetry, - inputs=[ - gr.Textbox(label="Enter short prompt or select from examples:"), - gr.Slider(0.50, 1.5, step=0.01,value=1.0, label='control temperature'), - gr.Slider(5, 50, step=1,value=25, label='control top k'), - gr.Slider(0.70, 0.9, step=0.01,value=0.8, label='control top p'), - gr.Slider(0.10, 1.0, step=0.01,value=1.0, label='control penalty'), - gr.Number(value=1359719, precision=0, label='Seed'), - ], - outputs=[gr.Textbox(label="Generated Poetry:")], - - allow_flagging='never', - title='Arabic Poetry Generation Demo (updated Jan. 2023)', - description = "A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)", - examples=samples, - cache_examples=False, - article = notes) -poetry.launch() \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py deleted file mode 100644 index 190309fd42a1b76c12c82fc1acf0511494be5ac3..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py +++ /dev/null @@ -1,215 +0,0 @@ -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class LegacyDeltaXYWHBBoxCoder(BaseBBoxCoder): - """Legacy Delta XYWH BBox coder used in MMDet V1.x. - - Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2, - y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) - back to original bbox (x1, y1, x2, y2). - - Note: - The main difference between :class`LegacyDeltaXYWHBBoxCoder` and - :class:`DeltaXYWHBBoxCoder` is whether ``+ 1`` is used during width and - height calculation. We suggest to only use this coder when testing with - MMDet V1.x models. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Args: - target_means (Sequence[float]): denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): denormalizing standard deviation of - target for delta coordinates - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.)): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. 
- - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = legacy_bbox2delta(bboxes, gt_bboxes, self.means, - self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Encoded boxes with shape - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(0) == bboxes.size(0) - decoded_bboxes = legacy_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def legacy_bbox2delta(proposals, - gt, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt in the MMDet V1.x manner. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of `delta2bbox()` - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. - """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] + 1.0 - ph = proposals[..., 3] - proposals[..., 1] + 1.0 - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] + 1.0 - gh = gt[..., 3] - gt[..., 1] + 1.0 - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def legacy_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply deltas to shift/scale base boxes in the MMDet V1.x manner. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of `bbox2delta()` - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when - rois is a grid of anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - wh_ratio_clip (float): Maximum aspect ratio for boxes. 
- - Returns: - Tensor: Boxes with shape (N, 4), where columns represent - tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> legacy_delta2bbox(rois, deltas, max_shape=(32, 32)) - tensor([[0.0000, 0.0000, 1.5000, 1.5000], - [0.0000, 0.0000, 5.2183, 5.2183], - [0.0000, 0.1321, 7.8891, 0.8679], - [5.3967, 2.4251, 6.0033, 3.7749]]) - """ - means = deltas.new_tensor(means).repeat(1, deltas.size(1) // 4) - stds = deltas.new_tensor(stds).repeat(1, deltas.size(1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[:, 0::4] - dy = denorm_deltas[:, 1::4] - dw = denorm_deltas[:, 2::4] - dh = denorm_deltas[:, 3::4] - max_ratio = np.abs(np.log(wh_ratio_clip)) - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Compute center of each roi - px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx) - py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy) - # Compute width/height of each roi - pw = (rois[:, 2] - rois[:, 0] + 1.0).unsqueeze(1).expand_as(dw) - ph = (rois[:, 3] - rois[:, 1] + 1.0).unsqueeze(1).expand_as(dh) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + pw * dx - gy = py + ph * dy - # Convert center-xy/width/height to top-left, bottom-right - - # The true legacy box coder should +- 0.5 here. - # However, current implementation improves the performance when testing - # the models trained in MMDetection 1.X (~0.5 bbox AP, 0.2 mask AP) - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - if max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view_as(deltas) - return bboxes diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/samplers/instance_balanced_pos_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/samplers/instance_balanced_pos_sampler.py deleted file mode 100644 index c735298487e14e4a0ec42913f25673cccb98a8a0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/samplers/instance_balanced_pos_sampler.py +++ /dev/null @@ -1,55 +0,0 @@ -import numpy as np -import torch - -from ..builder import BBOX_SAMPLERS -from .random_sampler import RandomSampler - - -@BBOX_SAMPLERS.register_module() -class InstanceBalancedPosSampler(RandomSampler): - """Instance balanced sampler that samples equal number of positive samples - for each instance.""" - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): The assigned results of boxes. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. 
- """ - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - unique_gt_inds = assign_result.gt_inds[pos_inds].unique() - num_gts = len(unique_gt_inds) - num_per_gt = int(round(num_expected / float(num_gts)) + 1) - sampled_inds = [] - for i in unique_gt_inds: - inds = torch.nonzero( - assign_result.gt_inds == i.item(), as_tuple=False) - if inds.numel() != 0: - inds = inds.squeeze(1) - else: - continue - if len(inds) > num_per_gt: - inds = self.random_choice(inds, num_per_gt) - sampled_inds.append(inds) - sampled_inds = torch.cat(sampled_inds) - if len(sampled_inds) < num_expected: - num_extra = num_expected - len(sampled_inds) - extra_inds = np.array( - list(set(pos_inds.cpu()) - set(sampled_inds.cpu()))) - if len(extra_inds) > num_extra: - extra_inds = self.random_choice(extra_inds, num_extra) - extra_inds = torch.from_numpy(extra_inds).to( - assign_result.gt_inds.device).long() - sampled_inds = torch.cat([sampled_inds, extra_inds]) - elif len(sampled_inds) > num_expected: - sampled_inds = self.random_choice(sampled_inds, num_expected) - return sampled_inds diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev.py deleted file mode 100644 index 9b53b1f026612be34c2be41f5a45ae4d9e322cce..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev.py +++ /dev/null @@ -1,655 +0,0 @@ -import os -import time -import fcntl -import ctypes -import warnings - -from ctypes import c_uint16 as _u16 -from ctypes import c_int16 as _s16 -from ctypes import c_uint32 as _u32 -from ctypes import c_int32 as _s32 -from ctypes import c_int64 as _s64 -from concurrent.futures import ThreadPoolExecutor - -from typing import List - -import pyglet - -from .evdev_constants import * -from pyglet.app.xlib import XlibSelectDevice -from pyglet.input.base import Device, RelativeAxis, AbsoluteAxis, Button, Joystick, Controller -from pyglet.input.base import DeviceOpenException, ControllerManager -from pyglet.input.controller import get_mapping, Relation, create_guid - -_IOC_NRBITS = 8 -_IOC_TYPEBITS = 8 -_IOC_SIZEBITS = 14 -_IOC_DIRBITS = 2 - -_IOC_NRSHIFT = 0 -_IOC_TYPESHIFT = (_IOC_NRSHIFT + _IOC_NRBITS) -_IOC_SIZESHIFT = (_IOC_TYPESHIFT + _IOC_TYPEBITS) -_IOC_DIRSHIFT = (_IOC_SIZESHIFT + _IOC_SIZEBITS) - -_IOC_NONE = 0 -_IOC_WRITE = 1 -_IOC_READ = 2 - - -def _IOC(dir, type, nr, size): - return ((dir << _IOC_DIRSHIFT) | - (type << _IOC_TYPESHIFT) | - (nr << _IOC_NRSHIFT) | - (size << _IOC_SIZESHIFT)) - - -def _IOR(type, nr, struct): - request = _IOC(_IOC_READ, ord(type), nr, ctypes.sizeof(struct)) - - def f(fileno): - buffer = struct() - fcntl.ioctl(fileno, request, buffer) - return buffer - - return f - - -def _IOR_len(type, nr): - def f(fileno, buffer): - request = _IOC(_IOC_READ, ord(type), nr, ctypes.sizeof(buffer)) - fcntl.ioctl(fileno, request, buffer) - return buffer - - return f - - -def _IOR_str(type, nr): - g = _IOR_len(type, nr) - - def f(fileno, length=256): - return g(fileno, ctypes.create_string_buffer(length)).value - - return f - - -def _IOW(type, nr): - - def f(fileno, buffer): - request = _IOC(_IOC_WRITE, ord(type), nr, ctypes.sizeof(buffer)) - fcntl.ioctl(fileno, request, buffer) - - return f - - -# 
Structures from /linux/blob/master/include/uapi/linux/input.h - -class Timeval(ctypes.Structure): - _fields_ = ( - ('tv_sec', _s64), - ('tv_usec', _s64) - ) - - -class InputEvent(ctypes.Structure): - _fields_ = ( - ('time', Timeval), - ('type', _u16), - ('code', _u16), - ('value', _s32) - ) - - -class InputID(ctypes.Structure): - _fields_ = ( - ('bustype', _u16), - ('vendor', _u16), - ('product', _u16), - ('version', _u16), - ) - - -class InputABSInfo(ctypes.Structure): - _fields_ = ( - ('value', _s32), - ('minimum', _s32), - ('maximum', _s32), - ('fuzz', _s32), - ('flat', _s32), - ) - - -class FFReplay(ctypes.Structure): - _fields_ = ( - ('length', _u16), - ('delay', _u16) - ) - - -class FFTrigger(ctypes.Structure): - _fields_ = ( - ('button', _u16), - ('interval', _u16) - ) - - -class FFEnvelope(ctypes.Structure): - _fields_ = [ - ('attack_length', _u16), - ('attack_level', _u16), - ('fade_length', _u16), - ('fade_level', _u16), - ] - - -class FFConstantEffect(ctypes.Structure): - _fields_ = [ - ('level', _s16), - ('ff_envelope', FFEnvelope), - ] - - -class FFRampEffect(ctypes.Structure): - _fields_ = [ - ('start_level', _s16), - ('end_level', _s16), - ('ff_envelope', FFEnvelope), - ] - - -class FFConditionEffect(ctypes.Structure): - _fields_ = [ - ('right_saturation', _u16), - ('left_saturation', _u16), - ('right_coeff', _s16), - ('left_coeff', _s16), - ('deadband', _u16), - ('center', _s16), - ] - - -class FFPeriodicEffect(ctypes.Structure): - _fields_ = [ - ('waveform', _u16), - ('period', _u16), - ('magnitude', _s16), - ('offset', _s16), - ('phase', _u16), - ('envelope', FFEnvelope), - ('custom_len', _u32), - ('custom_data', ctypes.POINTER(_s16)), - ] - - -class FFRumbleEffect(ctypes.Structure): - _fields_ = ( - ('strong_magnitude', _u16), - ('weak_magnitude', _u16) - ) - - -class FFEffectType(ctypes.Union): - _fields_ = ( - ('ff_constant_effect', FFConstantEffect), - ('ff_ramp_effect', FFRampEffect), - ('ff_periodic_effect', FFPeriodicEffect), - ('ff_condition_effect', FFConditionEffect * 2), - ('ff_rumble_effect', FFRumbleEffect), - ) - - -class FFEvent(ctypes.Structure): - _fields_ = ( - ('type', _u16), - ('id', _s16), - ('direction', _u16), - ('ff_trigger', FFTrigger), - ('ff_replay', FFReplay), - ('u', FFEffectType) - ) - - -EVIOCGVERSION = _IOR('E', 0x01, ctypes.c_int) -EVIOCGID = _IOR('E', 0x02, InputID) -EVIOCGNAME = _IOR_str('E', 0x06) -EVIOCGPHYS = _IOR_str('E', 0x07) -EVIOCGUNIQ = _IOR_str('E', 0x08) -EVIOCSFF = _IOW('E', 0x80) - - -def EVIOCGBIT(fileno, ev, buffer): - return _IOR_len('E', 0x20 + ev)(fileno, buffer) - - -def EVIOCGABS(fileno, abs): - buffer = InputABSInfo() - return _IOR_len('E', 0x40 + abs)(fileno, buffer) - - -def get_set_bits(bytestring): - bits = set() - j = 0 - for byte in bytestring: - for i in range(8): - if byte & 1: - bits.add(j + i) - byte >>= 1 - j += 8 - return bits - - -_abs_names = { - ABS_X: AbsoluteAxis.X, - ABS_Y: AbsoluteAxis.Y, - ABS_Z: AbsoluteAxis.Z, - ABS_RX: AbsoluteAxis.RX, - ABS_RY: AbsoluteAxis.RY, - ABS_RZ: AbsoluteAxis.RZ, - ABS_HAT0X: AbsoluteAxis.HAT_X, - ABS_HAT0Y: AbsoluteAxis.HAT_Y, -} - -_rel_names = { - REL_X: RelativeAxis.X, - REL_Y: RelativeAxis.Y, - REL_Z: RelativeAxis.Z, - REL_RX: RelativeAxis.RX, - REL_RY: RelativeAxis.RY, - REL_RZ: RelativeAxis.RZ, - REL_WHEEL: RelativeAxis.WHEEL, -} - - -def _create_control(fileno, event_type, event_code): - if event_type == EV_ABS: - raw_name = abs_raw_names.get(event_code, 'EV_ABS(%x)' % event_code) - name = _abs_names.get(event_code) - absinfo = EVIOCGABS(fileno, event_code) - 
value = absinfo.value - minimum = absinfo.minimum - maximum = absinfo.maximum - control = AbsoluteAxis(name, minimum, maximum, raw_name) - control.value = value - if name == 'hat_y': - control.inverted = True - elif event_type == EV_REL: - raw_name = rel_raw_names.get(event_code, 'EV_REL(%x)' % event_code) - name = _rel_names.get(event_code) - # TODO min/max? - control = RelativeAxis(name, raw_name) - elif event_type == EV_KEY: - raw_name = key_raw_names.get(event_code, 'EV_KEY(%x)' % event_code) - name = None - control = Button(name, raw_name) - else: - value = minimum = maximum = 0 # TODO - return None - control._event_type = event_type - control._event_code = event_code - return control - - -event_types = { - EV_KEY: KEY_MAX, - EV_REL: REL_MAX, - EV_ABS: ABS_MAX, - EV_MSC: MSC_MAX, - EV_LED: LED_MAX, - EV_SND: SND_MAX, - EV_FF: FF_MAX, -} - - -class EvdevDevice(XlibSelectDevice, Device): - _fileno = None - - def __init__(self, display, filename): - self._filename = filename - - fileno = os.open(filename, os.O_RDONLY) - # event_version = EVIOCGVERSION(fileno).value - - self._id = EVIOCGID(fileno) - self.id_bustype = self._id.bustype - self.id_vendor = hex(self._id.vendor) - self.id_product = hex(self._id.product) - self.id_version = self._id.version - - name = EVIOCGNAME(fileno) - try: - name = name.decode('utf-8') - except UnicodeDecodeError: - try: - name = name.decode('latin-1') - except UnicodeDecodeError: - pass - - try: - self.phys = EVIOCGPHYS(fileno) - except OSError: - self.phys = '' - try: - self.uniq = EVIOCGUNIQ(fileno) - except OSError: - self.uniq = '' - - self.controls = [] - self.control_map = {} - self.ff_types = [] - - event_types_bits = (ctypes.c_byte * 4)() - EVIOCGBIT(fileno, 0, event_types_bits) - for event_type in get_set_bits(event_types_bits): - if event_type not in event_types: - continue - max_code = event_types[event_type] - nbytes = max_code // 8 + 1 - event_codes_bits = (ctypes.c_byte * nbytes)() - EVIOCGBIT(fileno, event_type, event_codes_bits) - if event_type == EV_FF: - self.ff_types.extend(get_set_bits(event_codes_bits)) - else: - for event_code in get_set_bits(event_codes_bits): - control = _create_control(fileno, event_type, event_code) - if control: - self.control_map[(event_type, event_code)] = control - self.controls.append(control) - - self.controls.sort(key=lambda c: c._event_code) - os.close(fileno) - - super().__init__(display, name) - - def get_guid(self): - """Get the device's SDL2 style GUID string""" - _id = self._id - return create_guid(_id.bustype, _id.vendor, _id.product, _id.version, self.name, 0, 0) - - def open(self, window=None, exclusive=False): - super().open(window, exclusive) - - try: - self._fileno = os.open(self._filename, os.O_RDWR | os.O_NONBLOCK) - except OSError as e: - raise DeviceOpenException(e) - - pyglet.app.platform_event_loop.select_devices.add(self) - - def close(self): - super().close() - - if not self._fileno: - return - - pyglet.app.platform_event_loop.select_devices.remove(self) - os.close(self._fileno) - self._fileno = None - - def get_controls(self): - return self.controls - - # Force Feedback methods - - def ff_upload_effect(self, structure): - os.write(self._fileno, structure) - - # XlibSelectDevice interface - - def fileno(self): - return self._fileno - - def poll(self): - return False - - def select(self): - if not self._fileno: - return - - try: - events = (InputEvent * 64)() - bytes_read = os.readv(self._fileno, events) - except OSError: - self.close() - return - - n_events = bytes_read // 
ctypes.sizeof(InputEvent) - for event in events[:n_events]: - try: - control = self.control_map[(event.type, event.code)] - control.value = event.value - except KeyError: - pass - - -class FFController(Controller): - """Controller that supports force-feedback""" - _fileno = None - _weak_effect = None - _play_weak_event = None - _stop_weak_event = None - _strong_effect = None - _play_strong_event = None - _stop_strong_event = None - - def open(self, window=None, exclusive=False): - super().open(window, exclusive) - self._fileno = self.device.fileno() - # Create Force Feedback effects & events when opened: - # https://www.kernel.org/doc/html/latest/input/ff.html - self._weak_effect = FFEvent(FF_RUMBLE, -1) - EVIOCSFF(self._fileno, self._weak_effect) - self._play_weak_event = InputEvent(Timeval(), EV_FF, self._weak_effect.id, 1) - self._stop_weak_event = InputEvent(Timeval(), EV_FF, self._weak_effect.id, 0) - - self._strong_effect = FFEvent(FF_RUMBLE, -1) - EVIOCSFF(self._fileno, self._strong_effect) - self._play_strong_event = InputEvent(Timeval(), EV_FF, self._strong_effect.id, 1) - self._stop_strong_event = InputEvent(Timeval(), EV_FF, self._strong_effect.id, 0) - - def rumble_play_weak(self, strength=1.0, duration=0.5): - effect = self._weak_effect - effect.u.ff_rumble_effect.weak_magnitude = int(max(min(1.0, strength), 0) * 0xFFFF) - effect.ff_replay.length = int(duration * 1000) - EVIOCSFF(self._fileno, effect) - self.device.ff_upload_effect(self._play_weak_event) - - def rumble_play_strong(self, strength=1.0, duration=0.5): - effect = self._strong_effect - effect.u.ff_rumble_effect.strong_magnitude = int(max(min(1.0, strength), 0) * 0xFFFF) - effect.ff_replay.length = int(duration * 1000) - EVIOCSFF(self._fileno, effect) - self.device.ff_upload_effect(self._play_strong_event) - - def rumble_stop_weak(self): - self.device.ff_upload_effect(self._stop_weak_event) - - def rumble_stop_strong(self): - self.device.ff_upload_effect(self._stop_strong_event) - - -class EvdevControllerManager(ControllerManager, XlibSelectDevice): - - def __init__(self, display=None): - super().__init__() - self._display = display - self._devices_file = open('/proc/bus/input/devices') - self._device_names = self._get_device_names() - self._controllers = {} - self._thread_pool = ThreadPoolExecutor(max_workers=1) - - for name in self._device_names: - path = os.path.join('/dev/input', name) - try: - device = EvdevDevice(self._display, path) - except OSError: - continue - controller = _create_controller(device) - if controller: - self._controllers[name] = controller - - pyglet.app.platform_event_loop.select_devices.add(self) - - def __del__(self): - self._devices_file.close() - - def fileno(self): - """Allow this class to be Selectable""" - return self._devices_file.fileno() - - @staticmethod - def _get_device_names(): - return {name for name in os.listdir('/dev/input') if name.startswith('event')} - - def _make_device_callback(self, future): - name, device = future.result() - if not device: - return - - if name in self._controllers: - controller = self._controllers.get(name) - else: - controller = _create_controller(device) - self._controllers[name] = controller - - if controller: - # Dispatch event in main thread: - pyglet.app.platform_event_loop.post_event(self, 'on_connect', controller) - - def _make_device(self, name, count=1): - path = os.path.join('/dev/input', name) - while count > 0: - try: - return name, EvdevDevice(self._display, path) - except OSError: - if count > 0: - time.sleep(0.1) - count -= 1 - 
return None, None - - def select(self): - """Triggered whenever the devices_file changes.""" - new_device_files = self._get_device_names() - appeared = new_device_files - self._device_names - disappeared = self._device_names - new_device_files - self._device_names = new_device_files - - for name in appeared: - future = self._thread_pool.submit(self._make_device, name, count=10) - future.add_done_callback(self._make_device_callback) - - for name in disappeared: - controller = self._controllers.get(name) - if controller: - self.dispatch_event('on_disconnect', controller) - - def get_controllers(self) -> List[Controller]: - return list(self._controllers.values()) - - -def get_devices(display=None): - _devices = {} - base = '/dev/input' - for filename in os.listdir(base): - if filename.startswith('event'): - path = os.path.join(base, filename) - if path in _devices: - continue - - try: - _devices[path] = EvdevDevice(display, path) - except OSError: - pass - - return list(_devices.values()) - - -def _create_joystick(device): - # Look for something with an ABS X and ABS Y axis, and a joystick 0 button - have_x = False - have_y = False - have_button = False - for control in device.controls: - if control._event_type == EV_ABS and control._event_code == ABS_X: - have_x = True - elif control._event_type == EV_ABS and control._event_code == ABS_Y: - have_y = True - elif control._event_type == EV_KEY and control._event_code in (BTN_JOYSTICK, BTN_GAMEPAD): - have_button = True - if not (have_x and have_y and have_button): - return - - return Joystick(device) - - -def get_joysticks(display=None): - return [joystick for joystick in - [_create_joystick(device) for device in get_devices(display)] - if joystick is not None] - - -def _detect_controller_mapping(device): - # If no explicit mapping is available, we can - # detect it from the Linux gamepad specification: - # https://www.kernel.org/doc/html/v4.13/input/gamepad.html - # Note: legacy device drivers don't always adhere to this. 
- mapping = dict(guid=device.get_guid(), name=device.name) - - _aliases = {BTN_MODE: 'guide', BTN_SELECT: 'back', BTN_START: 'start', - BTN_SOUTH: 'a', BTN_EAST: 'b', BTN_WEST: 'x', BTN_NORTH: 'y', - BTN_TL: 'leftshoulder', BTN_TR: 'rightshoulder', - BTN_TL2: 'lefttrigger', BTN_TR2: 'righttrigger', - BTN_THUMBL: 'leftstick', BTN_THUMBR: 'rightstick', - BTN_DPAD_UP: 'dpup', BTN_DPAD_DOWN: 'dpdown', - BTN_DPAD_LEFT: 'dpleft', BTN_DPAD_RIGHT: 'dpright', - - ABS_HAT0X: 'dpleft', # 'dpright', - ABS_HAT0Y: 'dpup', # 'dpdown', - ABS_Z: 'lefttrigger', ABS_RZ: 'righttrigger', - ABS_X: 'leftx', ABS_Y: 'lefty', ABS_RX: 'rightx', ABS_RY: 'righty'} - - button_controls = [control for control in device.controls if isinstance(control, Button)] - axis_controls = [control for control in device.controls if isinstance(control, AbsoluteAxis)] - hat_controls = [control for control in device.controls if control.name in ('hat_x', 'hat_y')] - - for i, control in enumerate(button_controls): - name = _aliases.get(control._event_code) - if name: - mapping[name] = Relation('button', i) - - for i, control in enumerate(axis_controls): - name = _aliases.get(control._event_code) - if name: - mapping[name] = Relation('axis', i) - - for i, control in enumerate(hat_controls): - name = _aliases.get(control._event_code) - if name: - index = 1 + i << 1 - mapping[name] = Relation('hat0', index) - - return mapping - - -def _create_controller(device): - for control in device.controls: - if control._event_type == EV_KEY and control._event_code == BTN_GAMEPAD: - break - else: - return None # Game Controllers must have a BTN_GAMEPAD - - mapping = get_mapping(device.get_guid()) - if not mapping: - warnings.warn(f"Warning: {device} (GUID: {device.get_guid()}) " - f"has no controller mappings. Update the mappings in the Controller DB.\n" - f"Auto-detecting as defined by the 'Linux gamepad specification'") - mapping = _detect_controller_mapping(device) - - if FF_RUMBLE in device.ff_types: - return FFController(device, mapping) - else: - return Controller(device, mapping) - - -def get_controllers(display=None): - return [controller for controller in - [_create_controller(device) for device in get_devices(display)] - if controller is not None] diff --git a/spaces/akhaliq/Mask2Former/tools/convert-pretrained-swin-model-to-d2.py b/spaces/akhaliq/Mask2Former/tools/convert-pretrained-swin-model-to-d2.py deleted file mode 100644 index 8fbaeab990743e60bfe9f6197cbf3bf8585abcb6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/tools/convert-pretrained-swin-model-to-d2.py +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import pickle as pkl -import sys - -import torch - -""" -Usage: - # download pretrained swin model: - wget https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth - # run the conversion - ./convert-pretrained-model-to-d2.py swin_tiny_patch4_window7_224.pth swin_tiny_patch4_window7_224.pkl - # Then, use swin_tiny_patch4_window7_224.pkl with the following changes in config: -MODEL: - WEIGHTS: "/path/to/swin_tiny_patch4_window7_224.pkl" -INPUT: - FORMAT: "RGB" -""" - -if __name__ == "__main__": - input = sys.argv[1] - - obj = torch.load(input, map_location="cpu")["model"] - - res = {"model": obj, "__author__": "third_party", "matching_heuristics": True} - - with open(sys.argv[2], "wb") as f: - pkl.dump(res, f) diff --git a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/summscreen.py b/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/summscreen.py deleted file mode 100644 index 871b2fbaf273847aa6165b5f232fee6d1f568027..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/summscreen.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import json -import datasets - - -"""Summscreen dataset.""" - - -_CITATION = """ -@article{DBLP:journals/corr/abs-2104-07091, - author = {Mingda Chen and - Zewei Chu and - Sam Wiseman and - Kevin Gimpel}, - title = {SummScreen: {A} Dataset for Abstractive Screenplay Summarization}, - journal = {CoRR}, - volume = {abs/2104.07091}, - year = {2021}, - url = {https://arxiv.org/abs/2104.07091}, - archivePrefix = {arXiv}, - eprint = {2104.07091}, - timestamp = {Mon, 19 Apr 2021 16:45:47 +0200}, - biburl = {https://dblp.org/rec/journals/corr/abs-2104-07091.bib}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -""" - -_DESCRIPTION = """ -A summary of scientific papers should ideally incorporate the impact of the papers on the research community -reflected by citations. To facilitate research in citation-aware scientific paper summarization (Scisumm), -the CL-Scisumm shared task has been organized since 2014 for papers in the computational linguistics and NLP domain. 
-""" - -_HOMEPAGE = "https://github.com/mingdachen/SummScreen" - -_LICENSE = "MIT Licencse" - -_URLs = "https://drive.google.com/uc?id=1BvdIllGBo9d2-bzXQRzWuJXB04XPVmfF" - - -class SummertimeSummscreen(datasets.GeneratorBasedBuilder): - """Summscreen dataset.""" - - VERSION = datasets.Version("1.1.0") - - BUILDER_CONFIGS = [ - datasets.BuilderConfig(), - ] - - def _info(self): - features = datasets.Features( - { - "entry_number": datasets.Value("string"), - "transcript": datasets.features.Sequence(datasets.Value("string")), - "recap": datasets.Value("string"), - } - ) - return datasets.DatasetInfo( - description=_DESCRIPTION, - features=features, - supervised_keys=None, - homepage=_HOMEPAGE, - license=_LICENSE, - citation=_CITATION, - ) - - def _split_generators(self, dl_manager): - """Returns SplitGenerators.""" - my_urls = _URLs - path = dl_manager.download_and_extract(my_urls) - path = os.path.join(path, "SummScreen") - - trainpath_fd = os.path.join("ForeverDreaming", "fd_train.json") - trainpath_tms = os.path.join("TVMegaSite", "tms_train.json") - trainpaths = [trainpath_fd, trainpath_tms] - - devpath_fd = os.path.join("ForeverDreaming", "fd_dev.json") - devpath_tms = os.path.join("TVMegaSite", "tms_dev.json") - devpaths = [devpath_fd, devpath_tms] - - testpath_fd = os.path.join("ForeverDreaming", "fd_test.json") - testpath_tms = os.path.join("TVMegaSite", "tms_test.json") - testpaths = [testpath_fd, testpath_tms] - - return [ - datasets.SplitGenerator( - name=datasets.Split.TRAIN, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepaths": (path, trainpaths), "split": "train"}, - ), - datasets.SplitGenerator( - name=datasets.Split.VALIDATION, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepaths": (path, devpaths), "split": "dev"}, - ), - datasets.SplitGenerator( - name=datasets.Split.TEST, - # These kwargs will be passed to _generate_examples - gen_kwargs={"filepaths": (path, testpaths), "split": "test"}, - ), - ] - - def _generate_examples(self, filepaths, split): - """Yields examples.""" - - path, relative_filepaths = filepaths - for filepath in relative_filepaths: - - extraction_path = os.path.join(path, filepath) - - with open(extraction_path, "r") as f: - for line in f: - processed_line = line.replace("@@ ", "") - instance = json.loads(processed_line) - - entry = {} - entry["entry_number"] = instance["filename"] - entry["transcript"] = instance["Transcript"] - entry["recap"] = instance["Recap"][ - 0 - ] # Recap is a single string in list - - yield entry["entry_number"], entry diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_parallel_wavegan.py b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_parallel_wavegan.py deleted file mode 100644 index cab84451170823d00ae529e4c5a97d3fd3eb98d1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/test/test_parallel_wavegan.py +++ /dev/null @@ -1,354 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -import logging - -import numpy as np -import pytest -import torch - -from parallel_wavegan.losses import DiscriminatorAdversarialLoss -from parallel_wavegan.losses import GeneratorAdversarialLoss -from parallel_wavegan.losses import MultiResolutionSTFTLoss -from parallel_wavegan.models import ParallelWaveGANDiscriminator -from parallel_wavegan.models import ParallelWaveGANGenerator -from parallel_wavegan.models import ResidualParallelWaveGANDiscriminator -from 
parallel_wavegan.optimizers import RAdam - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", -) - - -def make_generator_args(**kwargs): - defaults = dict( - in_channels=1, - out_channels=1, - kernel_size=3, - layers=6, - stacks=3, - residual_channels=8, - gate_channels=16, - skip_channels=8, - aux_channels=10, - aux_context_window=0, - dropout=1 - 0.95, - use_weight_norm=True, - use_causal_conv=False, - upsample_conditional_features=True, - upsample_net="ConvInUpsampleNetwork", - upsample_params={"upsample_scales": [4, 4]}, - ) - defaults.update(kwargs) - return defaults - - -def make_discriminator_args(**kwargs): - defaults = dict( - in_channels=1, - out_channels=1, - kernel_size=3, - layers=5, - conv_channels=16, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - bias=True, - use_weight_norm=True, - ) - defaults.update(kwargs) - return defaults - - -def make_residual_discriminator_args(**kwargs): - defaults = dict( - in_channels=1, - out_channels=1, - kernel_size=3, - layers=10, - stacks=1, - residual_channels=8, - gate_channels=16, - skip_channels=8, - dropout=0.0, - use_weight_norm=True, - use_causal_conv=False, - nonlinear_activation_params={"negative_slope": 0.2}, - ) - defaults.update(kwargs) - return defaults - - -def make_mutli_reso_stft_loss_args(**kwargs): - defaults = dict( - fft_sizes=[64, 128, 256], - hop_sizes=[32, 64, 128], - win_lengths=[48, 96, 192], - window="hann_window", - ) - defaults.update(kwargs) - return defaults - - -@pytest.mark.parametrize( - "dict_g, dict_d, dict_loss", - [ - ({}, {}, {}), - ({"layers": 1, "stacks": 1}, {}, {}), - ({}, {"layers": 1}, {}), - ({"kernel_size": 5}, {}, {}), - ({}, {"kernel_size": 5}, {}), - ({"gate_channels": 8}, {}, {}), - ({"stacks": 1}, {}, {}), - ({"use_weight_norm": False}, {"use_weight_norm": False}, {}), - ({"aux_context_window": 2}, {}, {}), - ({"upsample_net": "UpsampleNetwork"}, {}, {}), - ( - {"upsample_params": {"upsample_scales": [4], "freq_axis_kernel_size": 3}}, - {}, - {}, - ), - ( - { - "upsample_params": { - "upsample_scales": [4], - "nonlinear_activation": "ReLU", - } - }, - {}, - {}, - ), - ( - { - "upsample_conditional_features": False, - "upsample_params": {"upsample_scales": [1]}, - }, - {}, - {}, - ), - ({}, {"nonlinear_activation": "ReLU", "nonlinear_activation_params": {}}, {}), - ({"use_causal_conv": True}, {}, {}), - ({"use_causal_conv": True, "upsample_net": "UpsampleNetwork"}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 1}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 2}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 3}, {}, {}), - ( - { - "aux_channels": 16, - "upsample_net": "MelGANGenerator", - "upsample_params": { - "upsample_scales": [4, 4], - "in_channels": 16, - "out_channels": 16, - }, - }, - {}, - {}, - ), - ], -) -def test_parallel_wavegan_trainable(dict_g, dict_d, dict_loss): - # setup - batch_size = 4 - batch_length = 4096 - args_g = make_generator_args(**dict_g) - args_d = make_discriminator_args(**dict_d) - args_loss = make_mutli_reso_stft_loss_args(**dict_loss) - z = torch.randn(batch_size, 1, batch_length) - y = torch.randn(batch_size, 1, batch_length) - c = torch.randn( - batch_size, - args_g["aux_channels"], - batch_length // np.prod(args_g["upsample_params"]["upsample_scales"]) - + 2 * args_g["aux_context_window"], - ) - model_g = ParallelWaveGANGenerator(**args_g) - model_d = ParallelWaveGANDiscriminator(**args_d) - 
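    # Losses and optimizers: the generator is trained with an adversarial loss
    # plus a multi-resolution STFT auxiliary loss, the discriminator with the
    # usual real/fake adversarial loss; both models use RAdam.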
aux_criterion = MultiResolutionSTFTLoss(**args_loss) - gen_adv_criterion = GeneratorAdversarialLoss() - dis_adv_criterion = DiscriminatorAdversarialLoss() - optimizer_g = RAdam(model_g.parameters()) - optimizer_d = RAdam(model_d.parameters()) - - # check generator trainable - y_hat = model_g(z, c) - p_hat = model_d(y_hat) - adv_loss = gen_adv_criterion(p_hat) - sc_loss, mag_loss = aux_criterion(y_hat, y) - aux_loss = sc_loss + mag_loss - loss_g = adv_loss + aux_loss - optimizer_g.zero_grad() - loss_g.backward() - optimizer_g.step() - - # check discriminator trainable - p = model_d(y) - p_hat = model_d(y_hat.detach()) - real_loss, fake_loss = dis_adv_criterion(p_hat, p) - loss_d = real_loss + fake_loss - optimizer_d.zero_grad() - loss_d.backward() - optimizer_d.step() - - -@pytest.mark.parametrize( - "dict_g, dict_d, dict_loss", - [ - ({}, {}, {}), - ({"layers": 1, "stacks": 1}, {}, {}), - ({}, {"layers": 1}, {}), - ({"kernel_size": 5}, {}, {}), - ({}, {"kernel_size": 5}, {}), - ({"gate_channels": 8}, {}, {}), - ({"stacks": 1}, {}, {}), - ({"use_weight_norm": False}, {"use_weight_norm": False}, {}), - ({"aux_context_window": 2}, {}, {}), - ({"upsample_net": "UpsampleNetwork"}, {}, {}), - ( - {"upsample_params": {"upsample_scales": [4], "freq_axis_kernel_size": 3}}, - {}, - {}, - ), - ( - { - "upsample_params": { - "upsample_scales": [4], - "nonlinear_activation": "ReLU", - } - }, - {}, - {}, - ), - ( - { - "upsample_conditional_features": False, - "upsample_params": {"upsample_scales": [1]}, - }, - {}, - {}, - ), - ({}, {"nonlinear_activation": "ReLU", "nonlinear_activation_params": {}}, {}), - ({"use_causal_conv": True}, {}, {}), - ({"use_causal_conv": True, "upsample_net": "UpsampleNetwork"}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 1}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 2}, {}, {}), - ({"use_causal_conv": True, "aux_context_window": 3}, {}, {}), - ( - { - "aux_channels": 16, - "upsample_net": "MelGANGenerator", - "upsample_params": { - "upsample_scales": [4, 4], - "in_channels": 16, - "out_channels": 16, - }, - }, - {}, - {}, - ), - ], -) -def test_parallel_wavegan_with_residual_discriminator_trainable( - dict_g, dict_d, dict_loss -): - # setup - batch_size = 4 - batch_length = 4096 - args_g = make_generator_args(**dict_g) - args_d = make_residual_discriminator_args(**dict_d) - args_loss = make_mutli_reso_stft_loss_args(**dict_loss) - z = torch.randn(batch_size, 1, batch_length) - y = torch.randn(batch_size, 1, batch_length) - c = torch.randn( - batch_size, - args_g["aux_channels"], - batch_length // np.prod(args_g["upsample_params"]["upsample_scales"]) - + 2 * args_g["aux_context_window"], - ) - model_g = ParallelWaveGANGenerator(**args_g) - model_d = ResidualParallelWaveGANDiscriminator(**args_d) - aux_criterion = MultiResolutionSTFTLoss(**args_loss) - gen_adv_criterion = GeneratorAdversarialLoss() - dis_adv_criterion = DiscriminatorAdversarialLoss() - optimizer_g = RAdam(model_g.parameters()) - optimizer_d = RAdam(model_d.parameters()) - - # check generator trainable - y_hat = model_g(z, c) - p_hat = model_d(y_hat) - adv_loss = gen_adv_criterion(p_hat) - sc_loss, mag_loss = aux_criterion(y_hat, y) - aux_loss = sc_loss + mag_loss - loss_g = adv_loss + aux_loss - optimizer_g.zero_grad() - loss_g.backward() - optimizer_g.step() - - # check discriminator trainable - p = model_d(y) - p_hat = model_d(y_hat.detach()) - real_loss, fake_loss = dis_adv_criterion(p_hat, p) - loss_d = real_loss + fake_loss - optimizer_d.zero_grad() - loss_d.backward() - 
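    # loss_d.backward() above populated the gradients; apply the discriminator update.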
optimizer_d.step() - - -@pytest.mark.parametrize( - "upsample_net, aux_context_window", - [ - ("ConvInUpsampleNetwork", 0), - ("ConvInUpsampleNetwork", 1), - ("ConvInUpsampleNetwork", 2), - ("ConvInUpsampleNetwork", 3), - ("UpsampleNetwork", 0), - ], -) -def test_causal_parallel_wavegan(upsample_net, aux_context_window): - batch_size = 1 - batch_length = 4096 - args_g = make_generator_args( - use_causal_conv=True, - upsample_net=upsample_net, - aux_context_window=aux_context_window, - dropout=0.0, - ) - model_g = ParallelWaveGANGenerator(**args_g) - z = torch.randn(batch_size, 1, batch_length) - c = torch.randn( - batch_size, - args_g["aux_channels"], - batch_length // np.prod(args_g["upsample_params"]["upsample_scales"]), - ) - - z_ = z.clone() - c_ = c.clone() - z_[..., z.size(-1) // 2 :] = torch.randn(z[..., z.size(-1) // 2 :].shape) - c_[..., c.size(-1) // 2 :] = torch.randn(c[..., c.size(-1) // 2 :].shape) - c = torch.nn.ConstantPad1d(args_g["aux_context_window"], 0.0)(c) - c_ = torch.nn.ConstantPad1d(args_g["aux_context_window"], 0.0)(c_) - try: - # check not equal - np.testing.assert_array_equal(c.numpy(), c_.numpy()) - except AssertionError: - pass - else: - raise AssertionError("Must be different.") - try: - # check not equal - np.testing.assert_array_equal(z.numpy(), z_.numpy()) - except AssertionError: - pass - else: - raise AssertionError("Must be different.") - - # check causality - y = model_g(z, c) - y_ = model_g(z_, c_) - np.testing.assert_array_equal( - y[..., : y.size(-1) // 2].detach().cpu().numpy(), - y_[..., : y_.size(-1) // 2].detach().cpu().numpy(), - ) diff --git a/spaces/ali-ghamdan/gfp-Gans/gfpgan/archs/stylegan2_bilinear_arch.py b/spaces/ali-ghamdan/gfp-Gans/gfpgan/archs/stylegan2_bilinear_arch.py deleted file mode 100644 index 1342ee3c9a6b8f742fb76ce7d5b907cd39fbc350..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/gfpgan/archs/stylegan2_bilinear_arch.py +++ /dev/null @@ -1,613 +0,0 @@ -import math -import random -import torch -from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn -from torch.nn import functional as F - - -class NormStyleCode(nn.Module): - - def forward(self, x): - """Normalize the style codes. - - Args: - x (Tensor): Style codes with shape (b, c). - - Returns: - Tensor: Normalized tensor. - """ - return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - - -class EqualLinear(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Size of each sample. - out_channels (int): Size of each output sample. - bias (bool): If set to ``False``, the layer will not learn an additive - bias. Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - lr_mul (float): Learning rate multiplier. Default: 1. - activation (None | str): The activation after ``linear`` operation. - Supported: 'fused_lrelu', None. Default: None. 
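    Note: the weight is stored divided by ``lr_mul`` and rescaled by
    ``(1 / sqrt(in_channels)) * lr_mul`` at forward time, i.e. the
    equalized learning rate scheme used in StyleGAN2.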
- """ - - def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None): - super(EqualLinear, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.lr_mul = lr_mul - self.activation = activation - if self.activation not in ['fused_lrelu', None]: - raise ValueError(f'Wrong activation value in EqualLinear: {activation}' - "Supported ones are: ['fused_lrelu', None].") - self.scale = (1 / math.sqrt(in_channels)) * lr_mul - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - if self.bias is None: - bias = None - else: - bias = self.bias * self.lr_mul - if self.activation == 'fused_lrelu': - out = F.linear(x, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - else: - out = F.linear(x, self.weight * self.scale, bias=bias) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, bias={self.bias is not None})') - - -class ModulatedConv2d(nn.Module): - """Modulated Conv2d used in StyleGAN2. - - There is no bias in ModulatedConv2d. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether to demodulate in the conv layer. - Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. - eps (float): A value added to the denominator for numerical stability. - Default: 1e-8. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - eps=1e-8, - interpolation_mode='bilinear'): - super(ModulatedConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.demodulate = demodulate - self.sample_mode = sample_mode - self.eps = eps - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - # modulation inside each modulated conv - self.modulation = EqualLinear( - num_style_feat, in_channels, bias=True, bias_init_val=1, lr_mul=1, activation=None) - - self.weight = nn.Parameter(torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)) - self.padding = kernel_size // 2 - - def forward(self, x, style): - """Forward function. - - Args: - x (Tensor): Tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - - Returns: - Tensor: Modulated tensor after convolution. 
- """ - b, c, h, w = x.shape # c = c_in - # weight modulation - style = self.modulation(style).view(b, 1, c, 1, 1) - # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1) - weight = self.scale * self.weight * style # (b, c_out, c_in, k, k) - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps) - weight = weight * demod.view(b, self.out_channels, 1, 1, 1) - - weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size) - - if self.sample_mode == 'upsample': - x = F.interpolate(x, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - elif self.sample_mode == 'downsample': - x = F.interpolate(x, scale_factor=0.5, mode=self.interpolation_mode, align_corners=self.align_corners) - - b, c, h, w = x.shape - x = x.view(1, b * c, h, w) - # weight: (b*c_out, c_in, k, k), groups=b - out = F.conv2d(x, weight, padding=self.padding, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size}, ' - f'demodulate={self.demodulate}, sample_mode={self.sample_mode})') - - -class StyleConv(nn.Module): - """Style conv. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether demodulate in the conv layer. Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode='bilinear'): - super(StyleConv, self).__init__() - self.modulated_conv = ModulatedConv2d( - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=demodulate, - sample_mode=sample_mode, - interpolation_mode=interpolation_mode) - self.weight = nn.Parameter(torch.zeros(1)) # for noise injection - self.activate = FusedLeakyReLU(out_channels) - - def forward(self, x, style, noise=None): - # modulate - out = self.modulated_conv(x, style) - # noise injection - if noise is None: - b, _, h, w = out.shape - noise = out.new_empty(b, 1, h, w).normal_() - out = out + self.weight * noise - # activation (with bias) - out = self.activate(out) - return out - - -class ToRGB(nn.Module): - """To RGB from features. - - Args: - in_channels (int): Channel number of input. - num_style_feat (int): Channel number of style features. - upsample (bool): Whether to upsample. Default: True. - """ - - def __init__(self, in_channels, num_style_feat, upsample=True, interpolation_mode='bilinear'): - super(ToRGB, self).__init__() - self.upsample = upsample - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - self.modulated_conv = ModulatedConv2d( - in_channels, - 3, - kernel_size=1, - num_style_feat=num_style_feat, - demodulate=False, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, x, style, skip=None): - """Forward function. - - Args: - x (Tensor): Feature tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - skip (Tensor): Base/skip tensor. Default: None. 
- - Returns: - Tensor: RGB images. - """ - out = self.modulated_conv(x, style) - out = out + self.bias - if skip is not None: - if self.upsample: - skip = F.interpolate( - skip, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - out = out + skip - return out - - -class ConstantInput(nn.Module): - """Constant input. - - Args: - num_channel (int): Channel number of constant input. - size (int): Spatial size of constant input. - """ - - def __init__(self, num_channel, size): - super(ConstantInput, self).__init__() - self.weight = nn.Parameter(torch.randn(1, num_channel, size, size)) - - def forward(self, batch): - out = self.weight.repeat(batch, 1, 1, 1) - return out - - -@ARCH_REGISTRY.register() -class StyleGAN2GeneratorBilinear(nn.Module): - """StyleGAN2 Generator. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of - StyleGAN2. Default: 2. - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - narrow (float): Narrow ratio for channels. Default: 1.0. - """ - - def __init__(self, - out_size, - num_style_feat=512, - num_mlp=8, - channel_multiplier=2, - lr_mlp=0.01, - narrow=1, - interpolation_mode='bilinear'): - super(StyleGAN2GeneratorBilinear, self).__init__() - # Style MLP layers - self.num_style_feat = num_style_feat - style_mlp_layers = [NormStyleCode()] - for i in range(num_mlp): - style_mlp_layers.append( - EqualLinear( - num_style_feat, num_style_feat, bias=True, bias_init_val=0, lr_mul=lr_mlp, - activation='fused_lrelu')) - self.style_mlp = nn.Sequential(*style_mlp_layers) - - channels = { - '4': int(512 * narrow), - '8': int(512 * narrow), - '16': int(512 * narrow), - '32': int(512 * narrow), - '64': int(256 * channel_multiplier * narrow), - '128': int(128 * channel_multiplier * narrow), - '256': int(64 * channel_multiplier * narrow), - '512': int(32 * channel_multiplier * narrow), - '1024': int(16 * channel_multiplier * narrow) - } - self.channels = channels - - self.constant_input = ConstantInput(channels['4'], size=4) - self.style_conv1 = StyleConv( - channels['4'], - channels['4'], - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False, interpolation_mode=interpolation_mode) - - self.log_size = int(math.log(out_size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - self.num_latent = self.log_size * 2 - 2 - - self.style_convs = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channels = channels['4'] - # noise - for layer_idx in range(self.num_layers): - resolution = 2**((layer_idx + 5) // 2) - shape = [1, 1, resolution, resolution] - self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape)) - # style convs and to_rgbs - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.style_convs.append( - StyleConv( - in_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode='upsample', - interpolation_mode=interpolation_mode)) - self.style_convs.append( - StyleConv( - out_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode)) - self.to_rgbs.append( - 
ToRGB(out_channels, num_style_feat, upsample=True, interpolation_mode=interpolation_mode)) - in_channels = out_channels - - def make_noise(self): - """Make noise for noise injection.""" - device = self.constant_input.weight.device - noises = [torch.randn(1, 1, 4, 4, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2**i, 2**i, device=device)) - - return noises - - def get_latent(self, x): - return self.style_mlp(x) - - def mean_latent(self, num_latent): - latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device) - latent = self.style_mlp(latent_in).mean(0, keepdim=True) - return latent - - def forward(self, - styles, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2Generator. - - Args: - styles (list[Tensor]): Sample codes of styles. - input_is_latent (bool): Whether input is latent style. - Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is - False. Default: True. - truncation (float): TODO. Default: 1. - truncation_latent (Tensor | None): TODO. Default: None. - inject_index (int | None): The injection index for mixing noise. - Default: None. - return_latents (bool): Whether to return style latents. - Default: False. - """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latent with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ScaledLeakyReLU(nn.Module): - """Scaled LeakyReLU. - - Args: - negative_slope (float): Negative slope. Default: 0.2. 
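    The leaky ReLU output is multiplied by sqrt(2), which roughly preserves the
    variance of the activations.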
- """ - - def __init__(self, negative_slope=0.2): - super(ScaledLeakyReLU, self).__init__() - self.negative_slope = negative_slope - - def forward(self, x): - out = F.leaky_relu(x, negative_slope=self.negative_slope) - return out * math.sqrt(2) - - -class EqualConv2d(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - stride (int): Stride of the convolution. Default: 1 - padding (int): Zero-padding added to both sides of the input. - Default: 0. - bias (bool): If ``True``, adds a learnable bias to the output. - Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - """ - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=True, bias_init_val=0): - super(EqualConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - out = F.conv2d( - x, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size},' - f' stride={self.stride}, padding={self.padding}, ' - f'bias={self.bias is not None})') - - -class ConvLayer(nn.Sequential): - """Conv Layer used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Kernel size. - downsample (bool): Whether downsample by a factor of 2. - Default: False. - bias (bool): Whether with bias. Default: True. - activate (bool): Whether use activateion. Default: True. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - downsample=False, - bias=True, - activate=True, - interpolation_mode='bilinear'): - layers = [] - self.interpolation_mode = interpolation_mode - # downsample - if downsample: - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - layers.append( - torch.nn.Upsample(scale_factor=0.5, mode=interpolation_mode, align_corners=self.align_corners)) - stride = 1 - self.padding = kernel_size // 2 - # conv - layers.append( - EqualConv2d( - in_channels, out_channels, kernel_size, stride=stride, padding=self.padding, bias=bias - and not activate)) - # activation - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channels)) - else: - layers.append(ScaledLeakyReLU(0.2)) - - super(ConvLayer, self).__init__(*layers) - - -class ResBlock(nn.Module): - """Residual block used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. 
- """ - - def __init__(self, in_channels, out_channels, interpolation_mode='bilinear'): - super(ResBlock, self).__init__() - - self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True) - self.conv2 = ConvLayer( - in_channels, - out_channels, - 3, - downsample=True, - interpolation_mode=interpolation_mode, - bias=True, - activate=True) - self.skip = ConvLayer( - in_channels, - out_channels, - 1, - downsample=True, - interpolation_mode=interpolation_mode, - bias=False, - activate=False) - - def forward(self, x): - out = self.conv1(x) - out = self.conv2(out) - skip = self.skip(x) - out = (out + skip) / math.sqrt(2) - return out diff --git a/spaces/allknowingroger/Image-Models-Test180/app.py b/spaces/allknowingroger/Image-Models-Test180/app.py deleted file mode 100644 index fc6c536b7f6a4e7762e54be382495abdca6c7783..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test180/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "mwiki/sd-xl-colab", - "ostris/ikea-instructions-lora-sdxl", - "abedsaad/lora-trained-xl-colab", - "livingbox/livingroom-01", - "milaidy/box", - "Jatin7698/my-pet-dog-xzg", - "Shabeena123/dog-xzg", - "gbellamy/lora-trained-xl-colab_2", - "AchyuthGamer/ImMagician", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - 
end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/util.py b/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/util.py deleted file mode 100644 index a29b95356c3449dc0a56362a3ed16dbdc5531306..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/util.py +++ /dev/null @@ -1,320 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import math - -import biotite.structure -from biotite.structure.io import pdbx, pdb -from biotite.structure.residues import get_residues -from biotite.structure import filter_backbone -from biotite.structure import get_chains -from biotite.sequence import ProteinSequence -import numpy as np -from scipy.spatial import transform -from scipy.stats import special_ortho_group -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.data as data -from typing import Sequence, Tuple, List - -from esm.data import BatchConverter - - -def load_structure(fpath, chain=None): - """ - Args: - fpath: filepath to either pdb or cif file - chain: the chain id or list of chain ids to load - Returns: - biotite.structure.AtomArray - """ - if fpath.endswith('cif'): - with open(fpath) as fin: - pdbxf = pdbx.PDBxFile.read(fin) - structure = pdbx.get_structure(pdbxf, model=1) - elif fpath.endswith('pdb'): - with open(fpath) as fin: - pdbf = pdb.PDBFile.read(fin) - structure = pdb.get_structure(pdbf, model=1) - bbmask = filter_backbone(structure) - structure = structure[bbmask] - all_chains = get_chains(structure) - if len(all_chains) == 0: - raise ValueError('No chains found in the input file.') - if chain is None: - chain_ids = all_chains - elif isinstance(chain, list): - chain_ids = chain - else: - chain_ids = [chain] - for chain in chain_ids: - if chain not in all_chains: - raise ValueError(f'Chain {chain} not found in input file') - chain_filter = [a.chain_id in chain_ids for a in structure] - structure = structure[chain_filter] - return structure - - -def extract_coords_from_structure(structure: biotite.structure.AtomArray): - """ - Args: - structure: An instance of biotite AtomArray - Returns: - Tuple (coords, seq) - - coords is an L x 3 x 3 array for N, CA, C coordinates - - seq is the extracted sequence - """ - coords = 
get_atom_coords_residuewise(["N", "CA", "C"], structure) - residue_identities = get_residues(structure)[1] - seq = ''.join([ProteinSequence.convert_letter_3to1(r) for r in residue_identities]) - return coords, seq - - -def load_coords(fpath, chain): - """ - Args: - fpath: filepath to either pdb or cif file - chain: the chain id - Returns: - Tuple (coords, seq) - - coords is an L x 3 x 3 array for N, CA, C coordinates - - seq is the extracted sequence - """ - structure = load_structure(fpath, chain) - return extract_coords_from_structure(structure) - - -def get_atom_coords_residuewise(atoms: List[str], struct: biotite.structure.AtomArray): - """ - Example for atoms argument: ["N", "CA", "C"] - """ - def filterfn(s, axis=None): - filters = np.stack([s.atom_name == name for name in atoms], axis=1) - sum = filters.sum(0) - if not np.all(sum <= np.ones(filters.shape[1])): - raise RuntimeError("structure has multiple atoms with same name") - index = filters.argmax(0) - coords = s[index].coord - coords[sum == 0] = float("nan") - return coords - - return biotite.structure.apply_residue_wise(struct, struct, filterfn) - - -def get_sequence_loss(model, alphabet, coords, seq): - batch_converter = CoordBatchConverter(alphabet) - batch = [(coords, None, seq)] - coords, confidence, strs, tokens, padding_mask = batch_converter(batch) - - prev_output_tokens = tokens[:, :-1] - target = tokens[:, 1:] - target_padding_mask = (target == alphabet.padding_idx) - logits, _ = model.forward(coords, padding_mask, confidence, prev_output_tokens) - loss = F.cross_entropy(logits, target, reduction='none') - loss = loss[0].detach().numpy() - target_padding_mask = target_padding_mask[0].numpy() - return loss, target_padding_mask - - -def score_sequence(model, alphabet, coords, seq): - loss, target_padding_mask = get_sequence_loss(model, alphabet, coords, seq) - ll_fullseq = -np.sum(loss * ~target_padding_mask) / np.sum(~target_padding_mask) - # Also calculate average when excluding masked portions - coord_mask = np.all(np.isfinite(coords), axis=(-1, -2)) - ll_withcoord = -np.sum(loss * coord_mask) / np.sum(coord_mask) - return ll_fullseq, ll_withcoord - - -def get_encoder_output(model, alphabet, coords): - batch_converter = CoordBatchConverter(alphabet) - # the batch_converter is essential for forming the correct input format - batch = [(coords, None, None)] - coords, confidence, _, _, padding_mask = batch_converter(batch) - encoder_out = model.encoder.forward(coords, padding_mask, confidence, - return_all_hiddens=False) - # remove beginning and end (bos and eos tokens) - return encoder_out['encoder_out'][0][1:-1, 0] - - -def rotate(v, R): - """ - Rotates a vector by a rotation matrix. - - Args: - v: 3D vector, tensor of shape (length x batch_size x channels x 3) - R: rotation matrix, tensor of shape (length x batch_size x 3 x 3) - - Returns: - Rotated version of v by rotation matrix R. - """ - R = R.unsqueeze(-3) - v = v.unsqueeze(-1) - return torch.sum(v * R, dim=-2) - - -def get_rotation_frames(coords): - """ - Returns a local rotation frame defined by N, CA, C positions. 
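    The frame is built by Gram-Schmidt orthogonalization: e1 along CA->C, e2 from
    the component of CA->N orthogonal to e1, and e3 as their cross product.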
- - Args: - coords: coordinates, tensor of shape (batch_size x length x 3 x 3) - where the third dimension is in order of N, CA, C - - Returns: - Local relative rotation frames in shape (batch_size x length x 3 x 3) - """ - v1 = coords[:, :, 2] - coords[:, :, 1] - v2 = coords[:, :, 0] - coords[:, :, 1] - e1 = normalize(v1, dim=-1) - u2 = v2 - e1 * torch.sum(e1 * v2, dim=-1, keepdim=True) - e2 = normalize(u2, dim=-1) - e3 = torch.cross(e1, e2, dim=-1) - R = torch.stack([e1, e2, e3], dim=-2) - return R - - -def nan_to_num(ts, val=0.0): - """ - Replaces nans in tensor with a fixed value. - """ - val = torch.tensor(val, dtype=ts.dtype, device=ts.device) - return torch.where(~torch.isfinite(ts), val, ts) - - -def rbf(values, v_min, v_max, n_bins=16): - """ - Returns RBF encodings in a new dimension at the end. - """ - rbf_centers = torch.linspace(v_min, v_max, n_bins, device=values.device) - rbf_centers = rbf_centers.view([1] * len(values.shape) + [-1]) - rbf_std = (v_max - v_min) / n_bins - v_expand = torch.unsqueeze(values, -1) - z = (values.unsqueeze(-1) - rbf_centers) / rbf_std - return torch.exp(-z ** 2) - - -def norm(tensor, dim, eps=1e-8, keepdim=False): - """ - Returns L2 norm along a dimension. - """ - return torch.sqrt( - torch.sum(torch.square(tensor), dim=dim, keepdim=keepdim) + eps) - - -def normalize(tensor, dim=-1): - """ - Normalizes a tensor along a dimension after removing nans. - """ - return nan_to_num( - torch.div(tensor, norm(tensor, dim=dim, keepdim=True)) - ) - - -class CoordBatchConverter(BatchConverter): - def __call__(self, raw_batch: Sequence[Tuple[Sequence, str]], device=None): - """ - Args: - raw_batch: List of tuples (coords, confidence, seq) - In each tuple, - coords: list of floats, shape L x 3 x 3 - confidence: list of floats, shape L; or scalar float; or None - seq: string of length L - Returns: - coords: Tensor of shape batch_size x L x 3 x 3 - confidence: Tensor of shape batch_size x L - strs: list of strings - tokens: LongTensor of shape batch_size x L - padding_mask: ByteTensor of shape batch_size x L - """ - self.alphabet.cls_idx = self.alphabet.get_idx("") - batch = [] - for coords, confidence, seq in raw_batch: - if confidence is None: - confidence = 1. - if isinstance(confidence, float) or isinstance(confidence, int): - confidence = [float(confidence)] * len(coords) - if seq is None: - seq = 'X' * len(coords) - batch.append(((coords, confidence), seq)) - - coords_and_confidence, strs, tokens = super().__call__(batch) - - # pad beginning and end of each protein due to legacy reasons - coords = [ - F.pad(torch.tensor(cd), (0, 0, 0, 0, 1, 1), value=np.inf) - for cd, _ in coords_and_confidence - ] - confidence = [ - F.pad(torch.tensor(cf), (1, 1), value=-1.) - for _, cf in coords_and_confidence - ] - coords = self.collate_dense_tensors(coords, pad_v=np.nan) - confidence = self.collate_dense_tensors(confidence, pad_v=-1.) - if device is not None: - coords = coords.to(device) - confidence = confidence.to(device) - tokens = tokens.to(device) - padding_mask = torch.isnan(coords[:,:,0,0]) - coord_mask = torch.isfinite(coords.sum(-2).sum(-1)) - confidence = confidence * coord_mask + (-1.) 
* padding_mask - return coords, confidence, strs, tokens, padding_mask - - def from_lists(self, coords_list, confidence_list=None, seq_list=None, device=None): - """ - Args: - coords_list: list of length batch_size, each item is a list of - floats in shape L x 3 x 3 to describe a backbone - confidence_list: one of - - None, default to highest confidence - - list of length batch_size, each item is a scalar - - list of length batch_size, each item is a list of floats of - length L to describe the confidence scores for the backbone - with values between 0. and 1. - seq_list: either None or a list of strings - Returns: - coords: Tensor of shape batch_size x L x 3 x 3 - confidence: Tensor of shape batch_size x L - strs: list of strings - tokens: LongTensor of shape batch_size x L - padding_mask: ByteTensor of shape batch_size x L - """ - batch_size = len(coords_list) - if confidence_list is None: - confidence_list = [None] * batch_size - if seq_list is None: - seq_list = [None] * batch_size - raw_batch = zip(coords_list, confidence_list, seq_list) - return self.__call__(raw_batch, device) - - @staticmethod - def collate_dense_tensors(samples, pad_v): - """ - Takes a list of tensors with the following dimensions: - [(d_11, ..., d_1K), - (d_21, ..., d_2K), - ..., - (d_N1, ..., d_NK)] - and stack + pads them into a single tensor of: - (N, max_i=1,N { d_i1 }, ..., max_i=1,N {diK}) - """ - if len(samples) == 0: - return torch.Tensor() - if len(set(x.dim() for x in samples)) != 1: - raise RuntimeError( - f"Samples has varying dimensions: {[x.dim() for x in samples]}" - ) - (device,) = tuple(set(x.device for x in samples)) # assumes all on same device - max_shape = [max(lst) for lst in zip(*[x.shape for x in samples])] - result = torch.empty( - len(samples), *max_shape, dtype=samples[0].dtype, device=device - ) - result.fill_(pad_v) - for i in range(len(samples)): - result_i = result[i] - t = samples[i] - result_i[tuple(slice(0, k) for k in t.shape)] = t - return result diff --git a/spaces/am4nsolanki/hateful-memes/README.md b/spaces/am4nsolanki/hateful-memes/README.md deleted file mode 100644 index 47dfd361b40ba12b4820bfafed75192b6bdd26e5..0000000000000000000000000000000000000000 --- a/spaces/am4nsolanki/hateful-memes/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Hateful Memes -emoji: 🦀 -colorFrom: purple -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
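For reference, a minimal front-matter block combining the keys above could look like
the following sketch (all values are illustrative placeholders, not taken from this Space):

```yaml
---
title: Example Space
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: streamlit
app_file: app.py
pinned: false
---
```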
diff --git a/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/Adi_sidebar.py b/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/Adi_sidebar.py deleted file mode 100644 index e8b2378f1ab54f666c6a0975bbb7827c1479416c..0000000000000000000000000000000000000000 --- a/spaces/amanatid/Adi_The_ArxivGPT_with_Voice/Adi_sidebar.py +++ /dev/null @@ -1,50 +0,0 @@ -import streamlit as st - -from faq import faq - - -#def set_openai_api_key(api_key: str): -# st.session_state["OPENAI_API_KEY"] = api_key - - -def sidebar(): - with st.sidebar: - st.markdown( - "## How to use\n" - "1. Enter your [OpenAI API key](https://platform.openai.com/account/api-keys) in the corresponding box.🔑\n" # noqa: E501 - "2. Choose the Scientific Topic to dicuss🚩\n" - "3. Load the number of papers you want to investigate. \n" - "4. Choose a criterion.\n" - "5. Wait for the message 'Arxiv papers are loaded based on the criteria' to be appeared.\n" - "6. Submit your question and wait the answer to be appeared. Then, press Listen to hear Adi.\n" - ) - - ''' - api_key_input = st.text_input( - "OpenAI API Key", - type="password", - placeholder="Paste your OpenAI API key here (sk-...)", - help="You can get your API key from https://platform.openai.com/account/api-keys.", - value=st.session_state.get("OPENAI_API_KEY", ""), - ) - - if api_key_input: - set_openai_api_key(api_key_input) - ''' - st.markdown("---") - st.markdown("# About") - st.markdown( - "📚ArxivGPT allows you to commit a scientific dialogue based on" - " a specific question/criterion and the amount of data that are loaded from" - "[arxiv.org](https://arxiv.org/). " - ) - st.markdown( - "This is a work in progress. " - "You can contribute to the project on [GitHub](https://github.com/amanatid/ArxivChatBot_StreamlitApp) " - "with your feedback and suggestions💡. Due to reqular updates from the llama/streamlit team, the app might " - "crash. I try to maintain it up. In any case, please report any problem in the email below." - ) - st.markdown("Made by [amanatid](amanatid@gmail.com)") - st.markdown("---") - - faq() diff --git a/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/README.md b/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/README.md deleted file mode 100644 index 08fe31d1f26c42a3f58a1adb0c646db37113a080..0000000000000000000000000000000000000000 --- a/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AliShaker Layoutlmv3 Finetuned Wildreceipt -emoji: 🏃 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aodianyun/stable-diffusion-webui/README.md b/spaces/aodianyun/stable-diffusion-webui/README.md deleted file mode 100644 index 6df06f6c80e563295933c5db39fca79b6275063f..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/README.md +++ /dev/null @@ -1,173 +0,0 @@ ---- -title: Stable Diffusion Webui -emoji: 🌖 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: launch.py -pinned: false -duplicated_from: jackli888/stable-diffusion-webui ---- - -# Stable Diffusion web UI -A browser interface based on Gradio library for Stable Diffusion. 
- -![](screenshot.png) - -## Features -[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features): -- Original txt2img and img2img modes -- One click install and run script (but you still must install python and git) -- Outpainting -- Inpainting -- Color Sketch -- Prompt Matrix -- Stable Diffusion Upscale -- Attention, specify parts of text that the model should pay more attention to - - a man in a ((tuxedo)) - will pay more attention to tuxedo - - a man in a (tuxedo:1.21) - alternative syntax - - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user) -- Loopback, run img2img processing multiple times -- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters -- Textual Inversion - - have as many embeddings as you want and use any names you like for them - - use multiple embeddings with different numbers of vectors per token - - works with half precision floating point numbers - - train embeddings on 8GB (also reports of 6GB working) -- Extras tab with: - - GFPGAN, neural network that fixes faces - - CodeFormer, face restoration tool as an alternative to GFPGAN - - RealESRGAN, neural network upscaler - - ESRGAN, neural network upscaler with a lot of third party models - - SwinIR and Swin2SR([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers - - LDSR, Latent diffusion super resolution upscaling -- Resizing aspect ratio options -- Sampling method selection - - Adjust sampler eta values (noise multiplier) - - More advanced noise setting options -- Interrupt processing at any time -- 4GB video card support (also reports of 2GB working) -- Correct seeds for batches -- Live prompt token length validation -- Generation parameters - - parameters you used to generate images are saved with that image - - in PNG chunks for PNG, in EXIF for JPEG - - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI - - can be disabled in settings - - drag and drop an image/text-parameters to promptbox -- Read Generation Parameters Button, loads parameters in promptbox to UI -- Settings page -- Running arbitrary python code from UI (must run with --allow-code to enable) -- Mouseover hints for most UI elements -- Possible to change defaults/mix/max/step values for UI elements via text config -- Tiling support, a checkbox to create images that can be tiled like textures -- Progress bar and live image generation preview - - Can use a separate neural network to produce previews with almost none VRAM or compute requirement -- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image -- Styles, a way to save part of prompt and easily apply them via dropdown later -- Variations, a way to generate same image but with tiny differences -- Seed resizing, a way to generate same image but at slightly different resolution -- CLIP interrogator, a button that tries to guess prompt from an image -- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway -- Batch Processing, process a group of files using img2img -- Img2img Alternative, reverse Euler method of cross attention control -- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions -- Reloading checkpoints on the fly -- Checkpoint Merger, a tab that allows 
you to merge up to 3 checkpoints into one -- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community -- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once - - separate prompts using uppercase `AND` - - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2` -- No token limit for prompts (original stable diffusion lets you use up to 75 tokens) -- DeepDanbooru integration, creates danbooru style tags for anime prompts -- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args) -- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI -- Generate forever option -- Training tab - - hypernetworks and embeddings options - - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime) -- Clip skip -- Hypernetworks -- Loras (same as Hypernetworks but more pretty) -- A sparate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt. -- Can select to load a different VAE from settings screen -- Estimated completion time in progress bar -- API -- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML. -- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)) -- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions -- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions -- Now without any bad letters! -- Load checkpoints in safetensors format -- Eased resolution restriction: generated image's domension must be a multiple of 8 rather than 64 -- Now with a license! -- Reorder elements in the UI from settings screen -- - -## Installation and Running -Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs. - -Alternatively, use online services (like Google Colab): - -- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) - -### Automatic Installation on Windows -1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH" -2. Install [git](https://git-scm.com/download/win). -3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`. -4. 
Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user. - -### Automatic Installation on Linux -1. Install the dependencies: -```bash -# Debian-based: -sudo apt install wget git python3 python3-venv -# Red Hat-based: -sudo dnf install wget git python3 -# Arch-based: -sudo pacman -S wget git python3 -``` -2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run: -```bash -bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) -``` -3. Run `webui.sh`. -### Installation on Apple Silicon - -Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon). - -## Contributing -Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) - -## Documentation -The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki). - -## Credits -Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file. - -- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers -- k-diffusion - https://github.com/crowsonkb/k-diffusion.git -- GFPGAN - https://github.com/TencentARC/GFPGAN.git -- CodeFormer - https://github.com/sczhou/CodeFormer -- ESRGAN - https://github.com/xinntao/ESRGAN -- SwinIR - https://github.com/JingyunLiang/SwinIR -- Swin2SR - https://github.com/mv-lab/swin2sr -- LDSR - https://github.com/Hafiidz/latent-diffusion -- MiDaS - https://github.com/isl-org/MiDaS -- Ideas for optimizations - https://github.com/basujindal/stable-diffusion -- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing. -- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion) -- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention) -- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas). -- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd -- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot -- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator -- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch -- xformers - https://github.com/facebookresearch/xformers -- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru -- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6) -- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix -- Security advice - RyotaK -- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user. 
-- (You) diff --git a/spaces/arch-123/bingo/src/components/chat.tsx b/spaces/arch-123/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/tune_wavegrad.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/tune_wavegrad.py deleted file mode 100644 index 09582cea7c7962b098efcde5754a02573d18264a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/bin/tune_wavegrad.py +++ /dev/null @@ -1,103 +0,0 @@ -"""Search a good noise schedule for WaveGrad for a given number of inference iterations""" -import argparse -from itertools import product as cartesian_product - -import numpy as np -import torch -from torch.utils.data import DataLoader -from tqdm import tqdm - -from TTS.config import load_config -from TTS.utils.audio import AudioProcessor -from TTS.vocoder.datasets.preprocess import load_wav_data -from TTS.vocoder.datasets.wavegrad_dataset import WaveGradDataset -from TTS.vocoder.models import setup_model - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_path", type=str, help="Path to model checkpoint.") - parser.add_argument("--config_path", type=str, help="Path to model config file.") - parser.add_argument("--data_path", type=str, help="Path to data directory.") - parser.add_argument("--output_path", type=str, help="path for output file including file name and extension.") - parser.add_argument( - "--num_iter", - type=int, - help="Number of model inference iterations that you like to optimize noise schedule for.", - ) - parser.add_argument("--use_cuda", action="store_true", help="enable CUDA.") - parser.add_argument("--num_samples", type=int, default=1, help="Number of datasamples used for inference.") - parser.add_argument( - "--search_depth", - type=int, - default=3, - help="Search granularity. Increasing this increases the run-time exponentially.", - ) - - # load config - args = parser.parse_args() - config = load_config(args.config_path) - - # setup audio processor - ap = AudioProcessor(**config.audio) - - # load dataset - _, train_data = load_wav_data(args.data_path, 0) - train_data = train_data[: args.num_samples] - dataset = WaveGradDataset( - ap=ap, - items=train_data, - seq_len=-1, - hop_len=ap.hop_length, - pad_short=config.pad_short, - conv_pad=config.conv_pad, - is_training=True, - return_segments=False, - use_noise_augment=False, - use_cache=False, - verbose=True, - ) - loader = DataLoader( - dataset, - batch_size=1, - shuffle=False, - collate_fn=dataset.collate_full_clips, - drop_last=False, - num_workers=config.num_loader_workers, - pin_memory=False, - ) - - # setup the model - model = setup_model(config) - if args.use_cuda: - model.cuda() - - # setup optimization parameters - base_values = sorted(10 * np.random.uniform(size=args.search_depth)) - print(f" > base values: {base_values}") - exponents = 10 ** np.linspace(-6, -1, num=args.num_iter) - best_error = float("inf") - best_schedule = None # pylint: disable=C0103 - total_search_iter = len(base_values) ** args.num_iter - for base in tqdm(cartesian_product(base_values, repeat=args.num_iter), total=total_search_iter): - beta = exponents * base - model.compute_noise_level(beta) - for data in loader: - mel, audio = data - y_hat = model.inference(mel.cuda() if args.use_cuda else mel) - - if args.use_cuda: - y_hat = y_hat.cpu() - y_hat = y_hat.numpy() - - mel_hat = [] - for i in range(y_hat.shape[0]): - m = ap.melspectrogram(y_hat[i, 0])[:, :-1] - mel_hat.append(torch.from_numpy(m)) - - mel_hat = torch.stack(mel_hat) - mse = torch.sum((mel - mel_hat) ** 2).mean() - if mse.item() < best_error: - best_error = 
mse.item() - best_schedule = {"beta": beta} - print(f" > Found a better schedule. - MSE: {mse.item()}") - np.save(args.output_path, best_schedule) diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/finetuning.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/finetuning.md deleted file mode 100644 index c236260d0cc889207869fa5a00f18163d875b54f..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/finetuning.md +++ /dev/null @@ -1,114 +0,0 @@ -# Fine-tuning a 🐸 TTS model - -## Fine-tuning - -Fine-tuning takes a pre-trained model and retrains it to improve the model performance on a different task or dataset. -In 🐸TTS we provide different pre-trained models in different languages, each with different pros and cons. You can take one of -them and fine-tune it for your own dataset. This will help you in two main ways: - -1. Faster learning - - Since a pre-trained model has already learned features that are relevant for the task, it will converge faster on - a new dataset. This will reduce the cost of training and let you experiment faster. - -2. Better results with small datasets - - Deep learning models are data hungry and they give better performance with more data. However, it is not always - possible to have this abundance, especially in specific domains. For instance, the LJSpeech dataset, which we released most of - our English models with, is almost 24 hours long. It takes weeks to record this amount of data with - the help of a voice actor. - - Fine-tuning comes to the rescue in this case. You can take one of our pre-trained models and fine-tune it on your own - speech dataset and achieve reasonable results with only a couple of hours of data. - - However, note that fine-tuning does not ensure great results. The model performance still depends on the - {ref}`dataset quality ` and the hyper-parameters you choose for fine-tuning. Therefore, - it still takes a bit of tinkering. - - -## Steps to fine-tune a 🐸 TTS model - -1. Set up your dataset. - - You need to format your target dataset in a certain way so that the 🐸TTS data loader will be able to load it for the - training. Please see {ref}`this page ` for more information about formatting. - -2. Choose the model you want to fine-tune. - - You can list the available models in the command line with - - ```bash - tts --list_models - ``` - - The command above lists the models in a naming format as ```///```. - - Or you can manually check the `.model.json` file in the project directory. - - You should choose the model based on your requirements. Some models are fast and some are better in speech quality. - One lazy way to test a model is to run it on the hardware you want to use and see how it works. For - simple testing, you can use the `tts` command on the terminal. For more info see {ref}`here `. - -3. Download the model. - - You can download the model by using the `tts` command. If you run `tts` with a particular model, it will download it automatically - and the model path will be printed on the terminal. - - ```bash - tts --model_name tts_models/es/mai/tacotron2-DDC --text "Ola." - - > Downloading model to /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts - ... - ``` - - In the example above, we called the Spanish Tacotron model, and the sample output shows the path where - the model is downloaded. - -4. Set up the model config for fine-tuning. - - You need to change certain fields in the model config. 
You have 3 options for playing with the configuration. - - 1. Edit the fields in the ```config.json``` file if you want to use ```TTS/bin/train_tts.py``` to train the model. - 2. Edit the fields in one of the training scripts in the ```recipes``` directory if you want to use python. - 3. Use the command-line arguments to override the fields like ```--coqpit.lr 0.00001``` to change the learning rate. - - Some of the important fields are as follows: - - - `datasets` field: This is set to the dataset you want to fine-tune the model on. - - `run_name` field: This is the name of the run. This is used to name the output directory and the entry in the - logging dashboard. - - `output_path` field: This is the path where the fine-tuned model is saved. - - `lr` field: You may need to use a smaller learning rate for fine-tuning to not lose the features learned by the - pre-trained model with big update steps. - - `audio` fields: Different datasets have different audio characteristics. You must check the current audio parameters and - make sure that the values reflect your dataset. For instance, your dataset might have a different audio sampling rate. - - Apart from the parameters above, you should check the whole configuration file and make sure that the values are correct for - your dataset and training. - -5. Start fine-tuning. - - Whether you use one of the training scripts under ```recipes``` folder or the ```train_tts.py``` to start - your training, you should use the ```--restore_path``` flag to specify the path to the pre-trained model. - - ```bash - CUDA_VISIBLE_DEVICES="0" python recipes/ljspeech/glow_tts/train_glowtts.py \ - --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth - ``` - - ```bash - CUDA_VISIBLE_DEVICES="0" python TTS/bin/train_tts.py \ - --config_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/config.json \ - --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth - ``` - - As stated above, you can also use command-line arguments to change the model configuration. 
- - - ```bash - CUDA_VISIBLE_DEVICES="0" python recipes/ljspeech/glow_tts/train_glowtts.py \ - --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth - --coqpit.run_name "glow-tts-finetune" \ - --coqpit.lr 0.00001 - ``` - diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/pyramid.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/pyramid.py deleted file mode 100644 index a53b90e05ddb117507f46a4385a6d63db1e3bb28..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/pyramid.py +++ /dev/null @@ -1,19 +0,0 @@ -""" -Pyramid Pie Chart ------------------ -Altair reproduction of http://robslink.com/SAS/democd91/pyramid_pie.htm -""" -import altair as alt -import pandas as pd - -category = ['Sky', 'Shady side of a pyramid', 'Sunny side of a pyramid'] -color = ["#416D9D", "#674028", "#DEAC58"] -df = pd.DataFrame({'category': category, 'value': [75, 10, 15]}) - -alt.Chart(df).mark_arc(outerRadius=80).encode( - alt.Theta('value:Q', scale=alt.Scale(range=[2.356, 8.639])), - alt.Color('category:N', - scale=alt.Scale(domain=category, range=color), - legend=alt.Legend(title=None, orient='none', legendX=160, legendY=50)), - order='value:Q' -).properties(width=150, height=150).configure_view(strokeOpacity=0) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_data.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_data.py deleted file mode 100644 index b4b4196ffc2d3f6215be3d3706a668cb96f9c544..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_data.py +++ /dev/null @@ -1,139 +0,0 @@ -import os - -import pytest -import pandas as pd -from toolz import pipe - -from ..data import limit_rows, MaxRowsError, sample, to_values, to_json, to_csv - - -def _create_dataframe(N): - data = pd.DataFrame({"x": range(N), "y": range(N)}) - return data - - -def _create_data_with_values(N): - data = {"values": [{"x": i, "y": i + 1} for i in range(N)]} - return data - - -def test_limit_rows(): - """Test the limit_rows data transformer.""" - data = _create_dataframe(10) - result = limit_rows(data, max_rows=20) - assert data is result - with pytest.raises(MaxRowsError): - pipe(data, limit_rows(max_rows=5)) - data = _create_data_with_values(10) - result = pipe(data, limit_rows(max_rows=20)) - assert data is result - with pytest.raises(MaxRowsError): - limit_rows(data, max_rows=5) - - -def test_sample(): - """Test the sample data transformer.""" - data = _create_dataframe(20) - result = pipe(data, sample(n=10)) - assert len(result) == 10 - assert isinstance(result, pd.DataFrame) - data = _create_data_with_values(20) - result = sample(data, n=10) - assert isinstance(result, dict) - assert "values" in result - assert len(result["values"]) == 10 - data = _create_dataframe(20) - result = pipe(data, sample(frac=0.5)) - assert len(result) == 10 - assert isinstance(result, pd.DataFrame) - data = _create_data_with_values(20) - result = sample(data, frac=0.5) - assert isinstance(result, dict) - assert "values" in result - assert len(result["values"]) == 10 - - -def test_to_values(): - """Test the to_values data transformer.""" - data = _create_dataframe(10) - result = pipe(data, to_values) - assert result == {"values": data.to_dict(orient="records")} - - -def test_type_error(): - """Ensure that 
TypeError is raised for types other than dict/DataFrame.""" - for f in (sample, limit_rows, to_values): - with pytest.raises(TypeError): - pipe(0, f) - - -def test_dataframe_to_json(): - """Test to_json - - make certain the filename is deterministic - - make certain the file contents match the data - """ - data = _create_dataframe(10) - try: - result1 = pipe(data, to_json) - result2 = pipe(data, to_json) - filename = result1["url"] - output = pd.read_json(filename) - finally: - os.remove(filename) - - assert result1 == result2 - assert output.equals(data) - - -def test_dict_to_json(): - """Test to_json - - make certain the filename is deterministic - - make certain the file contents match the data - """ - data = _create_data_with_values(10) - try: - result1 = pipe(data, to_json) - result2 = pipe(data, to_json) - filename = result1["url"] - output = pd.read_json(filename).to_dict(orient="records") - finally: - os.remove(filename) - - assert result1 == result2 - assert data == {"values": output} - - -def test_dataframe_to_csv(): - """Test to_csv with dataframe input - - make certain the filename is deterministic - - make certain the file contents match the data - """ - data = _create_dataframe(10) - try: - result1 = pipe(data, to_csv) - result2 = pipe(data, to_csv) - filename = result1["url"] - output = pd.read_csv(filename) - finally: - os.remove(filename) - - assert result1 == result2 - assert output.equals(data) - - -def test_dict_to_csv(): - """Test to_csv with dict input - - make certain the filename is deterministic - - make certain the file contents match the data - """ - data = _create_data_with_values(10) - try: - result1 = pipe(data, to_csv) - result2 = pipe(data, to_csv) - filename = result1["url"] - output = pd.read_csv(filename).to_dict(orient="records") - finally: - os.remove(filename) - - assert result1 == result2 - assert data == {"values": output} diff --git a/spaces/asquirous/tv_desktop_classifier/app.py b/spaces/asquirous/tv_desktop_classifier/app.py deleted file mode 100644 index d88b8a8fabc80e4b8b3fd5f85285ba0d0e53cdc2..0000000000000000000000000000000000000000 --- a/spaces/asquirous/tv_desktop_classifier/app.py +++ /dev/null @@ -1,32 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner("model_tvdesktop.pkl") -labels = learn.dls.vocab - -def classify_image(img): - img = PILImage.create(img) - pred, idx, probs = learn.predict(img) - output = dict(zip(labels, map(float, probs))) - print(probs[0]) - for out in output: - val = output[out] - if val < 60: - return {"Not Sure/Others": 0} - return output - -image = gr.inputs.Image(shape=(224, 224)) -label = gr.outputs.Label() - -title = "CRT TV and Desktop Monitor Classifier" -description = "A simple image classifier." 
- -intf = gr.Interface( - fn=classify_image, - inputs=image, - outputs=label, - title=title, - description=description -) - -intf.launch(inline=False) diff --git a/spaces/awacke1/MultiRhymeLyricSmith/README.md b/spaces/awacke1/MultiRhymeLyricSmith/README.md deleted file mode 100644 index a164b802b2f3e610de95723d2010740d77765b78..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MultiRhymeLyricSmith/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MultiRhymeLyricSmith -emoji: 🐢 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/htsat.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/htsat.py deleted file mode 100644 index 3b856c6a43df162116a941f1b5c76e93713b276a..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/htsat.py +++ /dev/null @@ -1,1308 +0,0 @@ -# Ke Chen -# knutchen@ucsd.edu -# HTS-AT: A HIERARCHICAL TOKEN-SEMANTIC AUDIO TRANSFORMER FOR SOUND CLASSIFICATION AND DETECTION -# Some layers designed on the model -# below codes are based and referred from https://github.com/microsoft/Swin-Transformer -# Swin Transformer for Computer Vision: https://arxiv.org/pdf/2103.14030.pdf - -import torch -import torch.nn as nn -import torch.nn.functional as F -from itertools import repeat -import collections.abc -import math -import warnings - -from torch.nn.init import _calculate_fan_in_and_fan_out -import torch.utils.checkpoint as checkpoint - -import random - -from torchlibrosa.stft import Spectrogram, LogmelFilterBank -from torchlibrosa.augmentation import SpecAugmentation - -from itertools import repeat -from .utils import do_mixup, interpolate - -from .feature_fusion import iAFF, AFF, DAF - -# from PyTorch internals -def _ntuple(n): - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def drop_path(x, drop_prob: float = 0.0, training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, - the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for - changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use - 'survival rate' as the argument. 
- """ - if drop_prob == 0.0 or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * ( - x.ndim - 1 - ) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -class PatchEmbed(nn.Module): - """2D Image to Patch Embedding""" - - def __init__( - self, - img_size=224, - patch_size=16, - in_chans=3, - embed_dim=768, - norm_layer=None, - flatten=True, - patch_stride=16, - enable_fusion=False, - fusion_type="None", - ): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patch_stride = to_2tuple(patch_stride) - self.img_size = img_size - self.patch_size = patch_size - self.patch_stride = patch_stride - self.grid_size = ( - img_size[0] // patch_stride[0], - img_size[1] // patch_stride[1], - ) - self.num_patches = self.grid_size[0] * self.grid_size[1] - self.flatten = flatten - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.enable_fusion = enable_fusion - self.fusion_type = fusion_type - - padding = ( - (patch_size[0] - patch_stride[0]) // 2, - (patch_size[1] - patch_stride[1]) // 2, - ) - - if (self.enable_fusion) and (self.fusion_type == "channel_map"): - self.proj = nn.Conv2d( - in_chans * 4, - embed_dim, - kernel_size=patch_size, - stride=patch_stride, - padding=padding, - ) - else: - self.proj = nn.Conv2d( - in_chans, - embed_dim, - kernel_size=patch_size, - stride=patch_stride, - padding=padding, - ) - self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity() - - if (self.enable_fusion) and ( - self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"] - ): - self.mel_conv2d = nn.Conv2d( - in_chans, - embed_dim, - kernel_size=(patch_size[0], patch_size[1] * 3), - stride=(patch_stride[0], patch_stride[1] * 3), - padding=padding, - ) - if self.fusion_type == "daf_2d": - self.fusion_model = DAF() - elif self.fusion_type == "aff_2d": - self.fusion_model = AFF(channels=embed_dim, type="2D") - elif self.fusion_type == "iaff_2d": - self.fusion_model = iAFF(channels=embed_dim, type="2D") - - def forward(self, x, longer_idx=None): - if (self.enable_fusion) and ( - self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"] - ): - global_x = x[:, 0:1, :, :] - - # global processing - B, C, H, W = global_x.shape - assert ( - H == self.img_size[0] and W == self.img_size[1] - ), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." 
- global_x = self.proj(global_x) - TW = global_x.size(-1) - if len(longer_idx) > 0: - # local processing - local_x = x[longer_idx, 1:, :, :].contiguous() - B, C, H, W = local_x.shape - local_x = local_x.view(B * C, 1, H, W) - local_x = self.mel_conv2d(local_x) - local_x = local_x.view( - B, C, local_x.size(1), local_x.size(2), local_x.size(3) - ) - local_x = local_x.permute((0, 2, 3, 1, 4)).contiguous().flatten(3) - TB, TC, TH, _ = local_x.size() - if local_x.size(-1) < TW: - local_x = torch.cat( - [ - local_x, - torch.zeros( - (TB, TC, TH, TW - local_x.size(-1)), - device=global_x.device, - ), - ], - dim=-1, - ) - else: - local_x = local_x[:, :, :, :TW] - - global_x[longer_idx] = self.fusion_model(global_x[longer_idx], local_x) - x = global_x - else: - B, C, H, W = x.shape - assert ( - H == self.img_size[0] and W == self.img_size[1] - ), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." - x = self.proj(x) - - if self.flatten: - x = x.flatten(2).transpose(1, 2) # BCHW -> BNC - x = self.norm(x) - return x - - -class Mlp(nn.Module): - """MLP as used in Vision Transformer, MLP-Mixer and related networks""" - - def __init__( - self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.0, - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0 - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - "mean is more than 2 std from [a, b] in nn.init.trunc_normal_. " - "The distribution of values may be incorrect.", - stacklevel=2, - ) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - l = norm_cdf((a - mean) / std) - u = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * l - 1, 2 * u - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.0)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0): - # type: (Tensor, float, float, float, float) -> Tensor - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. 
- Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - - -def variance_scaling_(tensor, scale=1.0, mode="fan_in", distribution="normal"): - fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor) - if mode == "fan_in": - denom = fan_in - elif mode == "fan_out": - denom = fan_out - elif mode == "fan_avg": - denom = (fan_in + fan_out) / 2 - - variance = scale / denom - - if distribution == "truncated_normal": - # constant is stddev of standard normal truncated to (-2, 2) - trunc_normal_(tensor, std=math.sqrt(variance) / 0.87962566103423978) - elif distribution == "normal": - tensor.normal_(std=math.sqrt(variance)) - elif distribution == "uniform": - bound = math.sqrt(3 * variance) - tensor.uniform_(-bound, bound) - else: - raise ValueError(f"invalid distribution {distribution}") - - -def lecun_normal_(tensor): - variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal") - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = ( - x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - ) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view( - B, H // window_size, W // window_size, window_size, window_size, -1 - ) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r"""Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = ( - coords_flatten[:, :, None] - coords_flatten[:, None, :] - ) # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute( - 1, 2, 0 - ).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1, - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze( - 1 - ).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x, attn - - def extra_repr(self): - return f"dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}" - - -# We use the model based on Swintransformer Block, therefore we can use the swin-transformer pretrained model -class SwinTransformerBlock(nn.Module): - r"""Swin Transformer Block. - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. 
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - input_resolution, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - norm_before_mlp="ln", - ): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - self.norm_before_mlp = norm_before_mlp - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert ( - 0 <= self.shift_size < self.window_size - ), "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - if self.norm_before_mlp == "ln": - self.norm2 = nn.LayerNorm(dim) - elif self.norm_before_mlp == "bn": - self.norm2 = lambda x: nn.BatchNorm1d(dim)(x.transpose(1, 2)).transpose( - 1, 2 - ) - else: - raise NotImplementedError - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=drop, - ) - - if self.shift_size > 0: - # calculate attention mask for SW-MSA - H, W = self.input_resolution - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill( - attn_mask != 0, float(-100.0) - ).masked_fill(attn_mask == 0, float(0.0)) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def forward(self, x): - # pdb.set_trace() - H, W = self.input_resolution - # print("H: ", H) - # print("W: ", W) - # pdb.set_trace() - B, L, C = x.shape - # assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll( - x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2) - ) - else: - shifted_x = x - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, 
self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows, attn = self.attn( - x_windows, mask=self.attn_mask - ) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2) - ) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x, attn - - def extra_repr(self): - return ( - f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - ) - - -class PatchMerging(nn.Module): - r"""Patch Merging Layer. - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x): - """ - x: B, H*W, C - """ - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." - - x = x.view(B, H, W, C) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self): - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
- """ - - def __init__( - self, - dim, - input_resolution, - depth, - num_heads, - window_size, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - norm_before_mlp="ln", - ): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - input_resolution=input_resolution, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] - if isinstance(drop_path, list) - else drop_path, - norm_layer=norm_layer, - norm_before_mlp=norm_before_mlp, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample( - input_resolution, dim=dim, norm_layer=norm_layer - ) - else: - self.downsample = None - - def forward(self, x): - attns = [] - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x) - else: - x, attn = blk(x) - if not self.training: - attns.append(attn.unsqueeze(0)) - if self.downsample is not None: - x = self.downsample(x) - if not self.training: - attn = torch.cat(attns, dim=0) - attn = torch.mean(attn, dim=0) - return x, attn - - def extra_repr(self): - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - -# The Core of HTSAT -class HTSAT_Swin_Transformer(nn.Module): - r"""HTSAT based on the Swin Transformer - Args: - spec_size (int | tuple(int)): Input Spectrogram size. Default 256 - patch_size (int | tuple(int)): Patch size. Default: 4 - path_stride (iot | tuple(int)): Patch Stride for Frequency and Time Axis. Default: 4 - in_chans (int): Number of input image channels. Default: 1 (mono) - num_classes (int): Number of classes for classification head. Default: 527 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each HTSAT-Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 8 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None - drop_rate (float): Dropout rate. Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. 
Default: False - config (module): The configuration Module from config.py - """ - - def __init__( - self, - spec_size=256, - patch_size=4, - patch_stride=(4, 4), - in_chans=1, - num_classes=527, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[4, 8, 16, 32], - window_size=8, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.1, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - use_checkpoint=False, - norm_before_mlp="ln", - config=None, - enable_fusion=False, - fusion_type="None", - **kwargs, - ): - super(HTSAT_Swin_Transformer, self).__init__() - - self.config = config - self.spec_size = spec_size - self.patch_stride = patch_stride - self.patch_size = patch_size - self.window_size = window_size - self.embed_dim = embed_dim - self.depths = depths - self.ape = ape - self.in_chans = in_chans - self.num_classes = num_classes - self.num_heads = num_heads - self.num_layers = len(self.depths) - self.num_features = int(self.embed_dim * 2 ** (self.num_layers - 1)) - - self.drop_rate = drop_rate - self.attn_drop_rate = attn_drop_rate - self.drop_path_rate = drop_path_rate - - self.qkv_bias = qkv_bias - self.qk_scale = None - - self.patch_norm = patch_norm - self.norm_layer = norm_layer if self.patch_norm else None - self.norm_before_mlp = norm_before_mlp - self.mlp_ratio = mlp_ratio - - self.use_checkpoint = use_checkpoint - - self.enable_fusion = enable_fusion - self.fusion_type = fusion_type - - # process mel-spec ; used only once - self.freq_ratio = self.spec_size // self.config.mel_bins - window = "hann" - center = True - pad_mode = "reflect" - ref = 1.0 - amin = 1e-10 - top_db = None - self.interpolate_ratio = 32 # Downsampled ratio - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram( - n_fft=config.window_size, - hop_length=config.hop_size, - win_length=config.window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank( - sr=config.sample_rate, - n_fft=config.window_size, - n_mels=config.mel_bins, - fmin=config.fmin, - fmax=config.fmax, - ref=ref, - amin=amin, - top_db=top_db, - freeze_parameters=True, - ) - # Spec augmenter - self.spec_augmenter = SpecAugmentation( - time_drop_width=64, - time_stripes_num=2, - freq_drop_width=8, - freq_stripes_num=2, - ) # 2 2 - self.bn0 = nn.BatchNorm2d(self.config.mel_bins) - - # split spctrogram into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=self.spec_size, - patch_size=self.patch_size, - in_chans=self.in_chans, - embed_dim=self.embed_dim, - norm_layer=self.norm_layer, - patch_stride=patch_stride, - enable_fusion=self.enable_fusion, - fusion_type=self.fusion_type, - ) - - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.grid_size - self.patches_resolution = patches_resolution - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, num_patches, self.embed_dim) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=self.drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, self.drop_path_rate, sum(self.depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(self.embed_dim * 2**i_layer), - input_resolution=( - patches_resolution[0] // (2**i_layer), - patches_resolution[1] // 
(2**i_layer), - ), - depth=self.depths[i_layer], - num_heads=self.num_heads[i_layer], - window_size=self.window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=self.qkv_bias, - qk_scale=self.qk_scale, - drop=self.drop_rate, - attn_drop=self.attn_drop_rate, - drop_path=dpr[ - sum(self.depths[:i_layer]) : sum(self.depths[: i_layer + 1]) - ], - norm_layer=self.norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint, - norm_before_mlp=self.norm_before_mlp, - ) - self.layers.append(layer) - - self.norm = self.norm_layer(self.num_features) - self.avgpool = nn.AdaptiveAvgPool1d(1) - self.maxpool = nn.AdaptiveMaxPool1d(1) - - SF = ( - self.spec_size - // (2 ** (len(self.depths) - 1)) - // self.patch_stride[0] - // self.freq_ratio - ) - self.tscam_conv = nn.Conv2d( - in_channels=self.num_features, - out_channels=self.num_classes, - kernel_size=(SF, 3), - padding=(0, 1), - ) - self.head = nn.Linear(num_classes, num_classes) - - if (self.enable_fusion) and ( - self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"] - ): - self.mel_conv1d = nn.Sequential( - nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2), - nn.BatchNorm1d(64), - ) - if self.fusion_type == "daf_1d": - self.fusion_model = DAF() - elif self.fusion_type == "aff_1d": - self.fusion_model = AFF(channels=64, type="1D") - elif self.fusion_type == "iaff_1d": - self.fusion_model = iAFF(channels=64, type="1D") - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {"absolute_pos_embed"} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {"relative_position_bias_table"} - - def forward_features(self, x, longer_idx=None): - # A deprecated optimization for using a hierarchical output from different blocks - - frames_num = x.shape[2] - x = self.patch_embed(x, longer_idx=longer_idx) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - for i, layer in enumerate(self.layers): - x, attn = layer(x) - # for x - x = self.norm(x) - B, N, C = x.shape - SF = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[0] - ST = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[1] - x = x.permute(0, 2, 1).contiguous().reshape(B, C, SF, ST) - B, C, F, T = x.shape - # group 2D CNN - c_freq_bin = F // self.freq_ratio - x = x.reshape(B, C, F // c_freq_bin, c_freq_bin, T) - x = x.permute(0, 1, 3, 2, 4).contiguous().reshape(B, C, c_freq_bin, -1) - # get latent_output - fine_grained_latent_output = torch.mean(x, dim=2) - fine_grained_latent_output = interpolate( - fine_grained_latent_output.permute(0, 2, 1).contiguous(), - 8 * self.patch_stride[1], - ) - - latent_output = self.avgpool(torch.flatten(x, 2)) - latent_output = torch.flatten(latent_output, 1) - - # display the attention map, if needed - - x = self.tscam_conv(x) - x = torch.flatten(x, 2) # B, C, T - - fpx = interpolate( - torch.sigmoid(x).permute(0, 2, 1).contiguous(), 8 * self.patch_stride[1] - ) - - x = self.avgpool(x) - x = torch.flatten(x, 1) - - output_dict = { - "framewise_output": fpx, # already sigmoided - "clipwise_output": torch.sigmoid(x), - "fine_grained_embedding": fine_grained_latent_output, - "embedding": latent_output, - } - - return output_dict - - def 
crop_wav(self, x, crop_size, spe_pos=None): - time_steps = x.shape[2] - tx = torch.zeros(x.shape[0], x.shape[1], crop_size, x.shape[3]).to(x.device) - for i in range(len(x)): - if spe_pos is None: - crop_pos = random.randint(0, time_steps - crop_size - 1) - else: - crop_pos = spe_pos - tx[i][0] = x[i, 0, crop_pos : crop_pos + crop_size, :] - return tx - - # Reshape the wavform to a img size, if you want to use the pretrained swin transformer model - def reshape_wav2img(self, x): - B, C, T, F = x.shape - target_T = int(self.spec_size * self.freq_ratio) - target_F = self.spec_size // self.freq_ratio - assert ( - T <= target_T and F <= target_F - ), "the wav size should less than or equal to the swin input size" - # to avoid bicubic zero error - if T < target_T: - x = nn.functional.interpolate( - x, (target_T, x.shape[3]), mode="bicubic", align_corners=True - ) - if F < target_F: - x = nn.functional.interpolate( - x, (x.shape[2], target_F), mode="bicubic", align_corners=True - ) - x = x.permute(0, 1, 3, 2).contiguous() - x = x.reshape( - x.shape[0], - x.shape[1], - x.shape[2], - self.freq_ratio, - x.shape[3] // self.freq_ratio, - ) - # print(x.shape) - x = x.permute(0, 1, 3, 2, 4).contiguous() - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3], x.shape[4]) - return x - - # Repeat the wavform to a img size, if you want to use the pretrained swin transformer model - def repeat_wat2img(self, x, cur_pos): - B, C, T, F = x.shape - target_T = int(self.spec_size * self.freq_ratio) - target_F = self.spec_size // self.freq_ratio - assert ( - T <= target_T and F <= target_F - ), "the wav size should less than or equal to the swin input size" - # to avoid bicubic zero error - if T < target_T: - x = nn.functional.interpolate( - x, (target_T, x.shape[3]), mode="bicubic", align_corners=True - ) - if F < target_F: - x = nn.functional.interpolate( - x, (x.shape[2], target_F), mode="bicubic", align_corners=True - ) - x = x.permute(0, 1, 3, 2).contiguous() # B C F T - x = x[:, :, :, cur_pos : cur_pos + self.spec_size] - x = x.repeat(repeats=(1, 1, 4, 1)) - return x - - def forward( - self, x: torch.Tensor, mixup_lambda=None, infer_mode=False, device=None - ): # out_feat_keys: List[str] = None): - - if self.enable_fusion and x["longer"].sum() == 0: - # if no audio is longer than 10s, then randomly select one audio to be longer - x["longer"][torch.randint(0, x["longer"].shape[0], (1,))] = True - - if not self.enable_fusion: - x = x["waveform"].to(device=device, non_blocking=True) - x = self.spectrogram_extractor(x) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - if self.training: - x = self.spec_augmenter(x) - - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - - x = self.reshape_wav2img(x) - output_dict = self.forward_features(x) - else: - longer_list = x["longer"].to(device=device, non_blocking=True) - x = x["mel_fusion"].to(device=device, non_blocking=True) - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - longer_list_idx = torch.where(longer_list)[0] - if self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"]: - new_x = x[:, 0:1, :, :].clone().contiguous() - if len(longer_list_idx) > 0: - # local processing - fusion_x_local = x[longer_list_idx, 1:, :, :].clone().contiguous() - FB, FC, FT, FF = fusion_x_local.size() - fusion_x_local = fusion_x_local.view(FB * FC, FT, FF) - fusion_x_local = torch.permute( - fusion_x_local, 
(0, 2, 1) - ).contiguous() - fusion_x_local = self.mel_conv1d(fusion_x_local) - fusion_x_local = fusion_x_local.view( - FB, FC, FF, fusion_x_local.size(-1) - ) - fusion_x_local = ( - torch.permute(fusion_x_local, (0, 2, 1, 3)) - .contiguous() - .flatten(2) - ) - if fusion_x_local.size(-1) < FT: - fusion_x_local = torch.cat( - [ - fusion_x_local, - torch.zeros( - (FB, FF, FT - fusion_x_local.size(-1)), - device=device, - ), - ], - dim=-1, - ) - else: - fusion_x_local = fusion_x_local[:, :, :FT] - # 1D fusion - new_x = new_x.squeeze(1).permute((0, 2, 1)).contiguous() - new_x[longer_list_idx] = self.fusion_model( - new_x[longer_list_idx], fusion_x_local - ) - x = new_x.permute((0, 2, 1)).contiguous()[:, None, :, :] - else: - x = new_x - - elif self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d", "channel_map"]: - x = x # no change - - if self.training: - x = self.spec_augmenter(x) - if self.training and mixup_lambda is not None: - x = do_mixup(x, mixup_lambda) - - x = self.reshape_wav2img(x) - output_dict = self.forward_features(x, longer_idx=longer_list_idx) - - # if infer_mode: - # # in infer mode. we need to handle different length audio input - # frame_num = x.shape[2] - # target_T = int(self.spec_size * self.freq_ratio) - # repeat_ratio = math.floor(target_T / frame_num) - # x = x.repeat(repeats=(1,1,repeat_ratio,1)) - # x = self.reshape_wav2img(x) - # output_dict = self.forward_features(x) - # else: - # if x.shape[2] > self.freq_ratio * self.spec_size: - # if self.training: - # x = self.crop_wav(x, crop_size=self.freq_ratio * self.spec_size) - # x = self.reshape_wav2img(x) - # output_dict = self.forward_features(x) - # else: - # # Change: Hard code here - # overlap_size = (x.shape[2] - 1) // 4 - # output_dicts = [] - # crop_size = (x.shape[2] - 1) // 2 - # for cur_pos in range(0, x.shape[2] - crop_size - 1, overlap_size): - # tx = self.crop_wav(x, crop_size = crop_size, spe_pos = cur_pos) - # tx = self.reshape_wav2img(tx) - # output_dicts.append(self.forward_features(tx)) - # clipwise_output = torch.zeros_like(output_dicts[0]["clipwise_output"]).float().to(x.device) - # framewise_output = torch.zeros_like(output_dicts[0]["framewise_output"]).float().to(x.device) - # for d in output_dicts: - # clipwise_output += d["clipwise_output"] - # framewise_output += d["framewise_output"] - # clipwise_output = clipwise_output / len(output_dicts) - # framewise_output = framewise_output / len(output_dicts) - # output_dict = { - # 'framewise_output': framewise_output, - # 'clipwise_output': clipwise_output - # } - # else: # this part is typically used, and most easy one - # x = self.reshape_wav2img(x) - # output_dict = self.forward_features(x) - # x = self.head(x) - - # We process the data in the dataloader part, in that here we only consider the input_T < fixed_T - - return output_dict - - -def create_htsat_model(audio_cfg, enable_fusion=False, fusion_type="None"): - try: - - assert audio_cfg.model_name in [ - "tiny", - "base", - "large", - ], "model name for HTS-AT is wrong!" 
- if audio_cfg.model_name == "tiny": - model = HTSAT_Swin_Transformer( - spec_size=256, - patch_size=4, - patch_stride=(4, 4), - num_classes=audio_cfg.class_num, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[4, 8, 16, 32], - window_size=8, - config=audio_cfg, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - elif audio_cfg.model_name == "base": - model = HTSAT_Swin_Transformer( - spec_size=256, - patch_size=4, - patch_stride=(4, 4), - num_classes=audio_cfg.class_num, - embed_dim=128, - depths=[2, 2, 12, 2], - num_heads=[4, 8, 16, 32], - window_size=8, - config=audio_cfg, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - elif audio_cfg.model_name == "large": - model = HTSAT_Swin_Transformer( - spec_size=256, - patch_size=4, - patch_stride=(4, 4), - num_classes=audio_cfg.class_num, - embed_dim=256, - depths=[2, 2, 12, 2], - num_heads=[4, 8, 16, 32], - window_size=8, - config=audio_cfg, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - - return model - except: - raise RuntimeError( - f"Import Model for {audio_cfg.model_name} not found, or the audio cfg parameters are not enough." - ) diff --git a/spaces/baixing/hackathon_chatbot_openai_api/README.md b/spaces/baixing/hackathon_chatbot_openai_api/README.md deleted file mode 100644 index 2abbe1b9ee1895bc974e097bcc97c53d13bb6c33..0000000000000000000000000000000000000000 --- a/spaces/baixing/hackathon_chatbot_openai_api/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: hackathon chatbot openai api -emoji: 🐨 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: cc-by-4.0 -duplicated_from: Elfe/hackathon_chatbot_simple ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/lines/LineSegmentsGeometry.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/lines/LineSegmentsGeometry.js deleted file mode 100644 index e34a6e4c4dfb5a43ce93db3bcde6c71f0076ff10..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/lines/LineSegmentsGeometry.js +++ /dev/null @@ -1,260 +0,0 @@ -/** - * @author WestLangley / http://github.com/WestLangley - * - */ - -THREE.LineSegmentsGeometry = function () { - - THREE.InstancedBufferGeometry.call( this ); - - this.type = 'LineSegmentsGeometry'; - - var plane = new THREE.BufferGeometry(); - - var positions = [ - 1, 2, 0, 1, 2, 0, - 1, 1, 0, 1, 1, 0, - 1, 0, 0, 1, 0, 0, - 1, - 1, 0, 1, - 1, 0 ]; - var uvs = [ - 1, 2, 1, 2, - 1, 1, 1, 1, - 1, - 1, 1, - 1, - 1, - 2, 1, - 2 ]; - var index = [ 0, 2, 1, 2, 3, 1, 2, 4, 3, 4, 5, 3, 4, 6, 5, 6, 7, 5 ]; - - this.setIndex( index ); - this.addAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) ); - this.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) ); - -}; - -THREE.LineSegmentsGeometry.prototype = Object.assign( Object.create( THREE.InstancedBufferGeometry.prototype ), { - - constructor: THREE.LineSegmentsGeometry, - - isLineSegmentsGeometry: true, - - applyMatrix: function ( matrix ) { - - var start = this.attributes.instanceStart; - var end = this.attributes.instanceEnd; - - if ( start !== undefined ) { - - matrix.applyToBufferAttribute( start ); - - matrix.applyToBufferAttribute( end ); - - start.data.needsUpdate = true; - - } - - if ( this.boundingBox !== null ) { - - this.computeBoundingBox(); - - } - - if ( this.boundingSphere !== null ) { - - 
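- // a bounding sphere was already cached for the old positions, so refresh it here
- // (mirroring the bounding-box update above) now that instanceStart/instanceEnd
- // have been transformed by the matrix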
this.computeBoundingSphere(); - - } - - return this; - - }, - - setPositions: function ( array ) { - - var lineSegments; - - if ( array instanceof Float32Array ) { - - lineSegments = array; - - } else if ( Array.isArray( array ) ) { - - lineSegments = new Float32Array( array ); - - } - - var instanceBuffer = new THREE.InstancedInterleavedBuffer( lineSegments, 6, 1 ); // xyz, xyz - - this.addAttribute( 'instanceStart', new THREE.InterleavedBufferAttribute( instanceBuffer, 3, 0 ) ); // xyz - this.addAttribute( 'instanceEnd', new THREE.InterleavedBufferAttribute( instanceBuffer, 3, 3 ) ); // xyz - - // - - this.computeBoundingBox(); - this.computeBoundingSphere(); - - return this; - - }, - - setColors: function ( array ) { - - var colors; - - if ( array instanceof Float32Array ) { - - colors = array; - - } else if ( Array.isArray( array ) ) { - - colors = new Float32Array( array ); - - } - - var instanceColorBuffer = new THREE.InstancedInterleavedBuffer( colors, 6, 1 ); // rgb, rgb - - this.addAttribute( 'instanceColorStart', new THREE.InterleavedBufferAttribute( instanceColorBuffer, 3, 0 ) ); // rgb - this.addAttribute( 'instanceColorEnd', new THREE.InterleavedBufferAttribute( instanceColorBuffer, 3, 3 ) ); // rgb - - return this; - - }, - - fromWireframeGeometry: function ( geometry ) { - - this.setPositions( geometry.attributes.position.array ); - - return this; - - }, - - fromEdgesGeometry: function ( geometry ) { - - this.setPositions( geometry.attributes.position.array ); - - return this; - - }, - - fromMesh: function ( mesh ) { - - this.fromWireframeGeometry( new THREE.WireframeGeometry( mesh.geometry ) ); - - // set colors, maybe - - return this; - - }, - - fromLineSegements: function ( lineSegments ) { - - var geometry = lineSegments.geometry; - - if ( geometry.isGeometry ) { - - this.setPositions( geometry.vertices ); - - } else if ( geometry.isBufferGeometry ) { - - this.setPositions( geometry.position.array ); // assumes non-indexed - - } - - // set colors, maybe - - return this; - - }, - - computeBoundingBox: function () { - - var box = new THREE.Box3(); - - return function computeBoundingBox() { - - if ( this.boundingBox === null ) { - - this.boundingBox = new THREE.Box3(); - - } - - var start = this.attributes.instanceStart; - var end = this.attributes.instanceEnd; - - if ( start !== undefined && end !== undefined ) { - - this.boundingBox.setFromBufferAttribute( start ); - - box.setFromBufferAttribute( end ); - - this.boundingBox.union( box ); - - } - - }; - - }(), - - computeBoundingSphere: function () { - - var vector = new THREE.Vector3(); - - return function computeBoundingSphere() { - - if ( this.boundingSphere === null ) { - - this.boundingSphere = new THREE.Sphere(); - - } - - if ( this.boundingBox === null ) { - - this.computeBoundingBox(); - - } - - var start = this.attributes.instanceStart; - var end = this.attributes.instanceEnd; - - if ( start !== undefined && end !== undefined ) { - - var center = this.boundingSphere.center; - - this.boundingBox.getCenter( center ); - - var maxRadiusSq = 0; - - for ( var i = 0, il = start.count; i < il; i ++ ) { - - vector.fromBufferAttribute( start, i ); - maxRadiusSq = Math.max( maxRadiusSq, center.distanceToSquared( vector ) ); - - vector.fromBufferAttribute( end, i ); - maxRadiusSq = Math.max( maxRadiusSq, center.distanceToSquared( vector ) ); - - } - - this.boundingSphere.radius = Math.sqrt( maxRadiusSq ); - - if ( isNaN( this.boundingSphere.radius ) ) { - - console.error( 'THREE.LineSegmentsGeometry.computeBoundingSphere(): 
Computed radius is NaN. The instanced position data is likely to have NaN values.', this ); - - } - - } - - }; - - }(), - - toJSON: function () { - - // todo - - }, - - clone: function () { - - // todo - - }, - - copy: function ( source ) { - - // todo - - return this; - - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/UnrealBloomPass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/UnrealBloomPass.js deleted file mode 100644 index 5f04412ae641a11f6fb31b0fad88df130d3b5303..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/UnrealBloomPass.js +++ /dev/null @@ -1,391 +0,0 @@ -/** - * @author spidersharma / http://eduperiment.com/ - * - * Inspired from Unreal Engine - * https://docs.unrealengine.com/latest/INT/Engine/Rendering/PostProcessEffects/Bloom/ - */ -THREE.UnrealBloomPass = function ( resolution, strength, radius, threshold ) { - - THREE.Pass.call( this ); - - this.strength = ( strength !== undefined ) ? strength : 1; - this.radius = radius; - this.threshold = threshold; - this.resolution = ( resolution !== undefined ) ? new THREE.Vector2( resolution.x, resolution.y ) : new THREE.Vector2( 256, 256 ); - - // create color only once here, reuse it later inside the render function - this.clearColor = new THREE.Color( 0, 0, 0 ); - - // render targets - var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat }; - this.renderTargetsHorizontal = []; - this.renderTargetsVertical = []; - this.nMips = 5; - var resx = Math.round( this.resolution.x / 2 ); - var resy = Math.round( this.resolution.y / 2 ); - - this.renderTargetBright = new THREE.WebGLRenderTarget( resx, resy, pars ); - this.renderTargetBright.texture.name = "UnrealBloomPass.bright"; - this.renderTargetBright.texture.generateMipmaps = false; - - for ( var i = 0; i < this.nMips; i ++ ) { - - var renderTargetHorizonal = new THREE.WebGLRenderTarget( resx, resy, pars ); - - renderTargetHorizonal.texture.name = "UnrealBloomPass.h" + i; - renderTargetHorizonal.texture.generateMipmaps = false; - - this.renderTargetsHorizontal.push( renderTargetHorizonal ); - - var renderTargetVertical = new THREE.WebGLRenderTarget( resx, resy, pars ); - - renderTargetVertical.texture.name = "UnrealBloomPass.v" + i; - renderTargetVertical.texture.generateMipmaps = false; - - this.renderTargetsVertical.push( renderTargetVertical ); - - resx = Math.round( resx / 2 ); - - resy = Math.round( resy / 2 ); - - } - - // luminosity high pass material - - if ( THREE.LuminosityHighPassShader === undefined ) - console.error( "THREE.UnrealBloomPass relies on THREE.LuminosityHighPassShader" ); - - var highPassShader = THREE.LuminosityHighPassShader; - this.highPassUniforms = THREE.UniformsUtils.clone( highPassShader.uniforms ); - - this.highPassUniforms[ "luminosityThreshold" ].value = threshold; - this.highPassUniforms[ "smoothWidth" ].value = 0.01; - - this.materialHighPassFilter = new THREE.ShaderMaterial( { - uniforms: this.highPassUniforms, - vertexShader: highPassShader.vertexShader, - fragmentShader: highPassShader.fragmentShader, - defines: {} - } ); - - // Gaussian Blur Materials - this.separableBlurMaterials = []; - var kernelSizeArray = [ 3, 5, 7, 9, 11 ]; - var resx = Math.round( this.resolution.x / 2 ); - var resy = Math.round( this.resolution.y / 2 ); - - for ( var i = 0; i < this.nMips; i ++ ) { - - this.separableBlurMaterials.push( this.getSeperableBlurMaterial( 
kernelSizeArray[ i ] ) ); - - this.separableBlurMaterials[ i ].uniforms[ "texSize" ].value = new THREE.Vector2( resx, resy ); - - resx = Math.round( resx / 2 ); - - resy = Math.round( resy / 2 ); - - } - - // Composite material - this.compositeMaterial = this.getCompositeMaterial( this.nMips ); - this.compositeMaterial.uniforms[ "blurTexture1" ].value = this.renderTargetsVertical[ 0 ].texture; - this.compositeMaterial.uniforms[ "blurTexture2" ].value = this.renderTargetsVertical[ 1 ].texture; - this.compositeMaterial.uniforms[ "blurTexture3" ].value = this.renderTargetsVertical[ 2 ].texture; - this.compositeMaterial.uniforms[ "blurTexture4" ].value = this.renderTargetsVertical[ 3 ].texture; - this.compositeMaterial.uniforms[ "blurTexture5" ].value = this.renderTargetsVertical[ 4 ].texture; - this.compositeMaterial.uniforms[ "bloomStrength" ].value = strength; - this.compositeMaterial.uniforms[ "bloomRadius" ].value = 0.1; - this.compositeMaterial.needsUpdate = true; - - var bloomFactors = [ 1.0, 0.8, 0.6, 0.4, 0.2 ]; - this.compositeMaterial.uniforms[ "bloomFactors" ].value = bloomFactors; - this.bloomTintColors = [ new THREE.Vector3( 1, 1, 1 ), new THREE.Vector3( 1, 1, 1 ), new THREE.Vector3( 1, 1, 1 ), - new THREE.Vector3( 1, 1, 1 ), new THREE.Vector3( 1, 1, 1 ) ]; - this.compositeMaterial.uniforms[ "bloomTintColors" ].value = this.bloomTintColors; - - // copy material - if ( THREE.CopyShader === undefined ) { - - console.error( "THREE.BloomPass relies on THREE.CopyShader" ); - - } - - var copyShader = THREE.CopyShader; - - this.copyUniforms = THREE.UniformsUtils.clone( copyShader.uniforms ); - this.copyUniforms[ "opacity" ].value = 1.0; - - this.materialCopy = new THREE.ShaderMaterial( { - uniforms: this.copyUniforms, - vertexShader: copyShader.vertexShader, - fragmentShader: copyShader.fragmentShader, - blending: THREE.AdditiveBlending, - depthTest: false, - depthWrite: false, - transparent: true - } ); - - this.enabled = true; - this.needsSwap = false; - - this.oldClearColor = new THREE.Color(); - this.oldClearAlpha = 1; - - this.basic = new THREE.MeshBasicMaterial(); - - this.fsQuad = new THREE.Pass.FullScreenQuad( null ); - -}; - -THREE.UnrealBloomPass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), { - - constructor: THREE.UnrealBloomPass, - - dispose: function () { - - for ( var i = 0; i < this.renderTargetsHorizontal.length; i ++ ) { - - this.renderTargetsHorizontal[ i ].dispose(); - - } - - for ( var i = 0; i < this.renderTargetsVertical.length; i ++ ) { - - this.renderTargetsVertical[ i ].dispose(); - - } - - this.renderTargetBright.dispose(); - - }, - - setSize: function ( width, height ) { - - var resx = Math.round( width / 2 ); - var resy = Math.round( height / 2 ); - - this.renderTargetBright.setSize( resx, resy ); - - for ( var i = 0; i < this.nMips; i ++ ) { - - this.renderTargetsHorizontal[ i ].setSize( resx, resy ); - this.renderTargetsVertical[ i ].setSize( resx, resy ); - - this.separableBlurMaterials[ i ].uniforms[ "texSize" ].value = new THREE.Vector2( resx, resy ); - - resx = Math.round( resx / 2 ); - resy = Math.round( resy / 2 ); - - } - - }, - - render: function ( renderer, writeBuffer, readBuffer, deltaTime, maskActive ) { - - this.oldClearColor.copy( renderer.getClearColor() ); - this.oldClearAlpha = renderer.getClearAlpha(); - var oldAutoClear = renderer.autoClear; - renderer.autoClear = false; - - renderer.setClearColor( this.clearColor, 0 ); - - if ( maskActive ) renderer.context.disable( renderer.context.STENCIL_TEST ); - - // Render 
input to screen - - if ( this.renderToScreen ) { - - this.fsQuad.material = this.basic; - this.basic.map = readBuffer.texture; - - renderer.setRenderTarget( null ); - renderer.clear(); - this.fsQuad.render( renderer ); - - } - - // 1. Extract Bright Areas - - this.highPassUniforms[ "tDiffuse" ].value = readBuffer.texture; - this.highPassUniforms[ "luminosityThreshold" ].value = this.threshold; - this.fsQuad.material = this.materialHighPassFilter; - - renderer.setRenderTarget( this.renderTargetBright ); - renderer.clear(); - this.fsQuad.render( renderer ); - - // 2. Blur All the mips progressively - - var inputRenderTarget = this.renderTargetBright; - - for ( var i = 0; i < this.nMips; i ++ ) { - - this.fsQuad.material = this.separableBlurMaterials[ i ]; - - this.separableBlurMaterials[ i ].uniforms[ "colorTexture" ].value = inputRenderTarget.texture; - this.separableBlurMaterials[ i ].uniforms[ "direction" ].value = THREE.UnrealBloomPass.BlurDirectionX; - renderer.setRenderTarget( this.renderTargetsHorizontal[ i ] ); - renderer.clear(); - this.fsQuad.render( renderer ); - - this.separableBlurMaterials[ i ].uniforms[ "colorTexture" ].value = this.renderTargetsHorizontal[ i ].texture; - this.separableBlurMaterials[ i ].uniforms[ "direction" ].value = THREE.UnrealBloomPass.BlurDirectionY; - renderer.setRenderTarget( this.renderTargetsVertical[ i ] ); - renderer.clear(); - this.fsQuad.render( renderer ); - - inputRenderTarget = this.renderTargetsVertical[ i ]; - - } - - // Composite All the mips - - this.fsQuad.material = this.compositeMaterial; - this.compositeMaterial.uniforms[ "bloomStrength" ].value = this.strength; - this.compositeMaterial.uniforms[ "bloomRadius" ].value = this.radius; - this.compositeMaterial.uniforms[ "bloomTintColors" ].value = this.bloomTintColors; - - renderer.setRenderTarget( this.renderTargetsHorizontal[ 0 ] ); - renderer.clear(); - this.fsQuad.render( renderer ); - - // Blend it additively over the input texture - - this.fsQuad.material = this.materialCopy; - this.copyUniforms[ "tDiffuse" ].value = this.renderTargetsHorizontal[ 0 ].texture; - - if ( maskActive ) renderer.context.enable( renderer.context.STENCIL_TEST ); - - - if ( this.renderToScreen ) { - - renderer.setRenderTarget( null ); - this.fsQuad.render( renderer ); - - } else { - - renderer.setRenderTarget( readBuffer ); - this.fsQuad.render( renderer ); - - } - - // Restore renderer settings - - renderer.setClearColor( this.oldClearColor, this.oldClearAlpha ); - renderer.autoClear = oldAutoClear; - - }, - - getSeperableBlurMaterial: function ( kernelRadius ) { - - return new THREE.ShaderMaterial( { - - defines: { - "KERNEL_RADIUS": kernelRadius, - "SIGMA": kernelRadius - }, - - uniforms: { - "colorTexture": { value: null }, - "texSize": { value: new THREE.Vector2( 0.5, 0.5 ) }, - "direction": { value: new THREE.Vector2( 0.5, 0.5 ) } - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "#include \ - varying vec2 vUv;\n\ - uniform sampler2D colorTexture;\n\ - uniform vec2 texSize;\ - uniform vec2 direction;\ - \ - float gaussianPdf(in float x, in float sigma) {\ - return 0.39894 * exp( -0.5 * x * x/( sigma * sigma))/sigma;\ - }\ - void main() {\n\ - vec2 invSize = 1.0 / texSize;\ - float fSigma = float(SIGMA);\ - float weightSum = gaussianPdf(0.0, fSigma);\ - vec3 diffuseSum = texture2D( colorTexture, vUv).rgb * weightSum;\ - for( int i = 1; i < KERNEL_RADIUS; i ++ ) {\ - 
float x = float(i);\ - float w = gaussianPdf(x, fSigma);\ - vec2 uvOffset = direction * invSize * x;\ - vec3 sample1 = texture2D( colorTexture, vUv + uvOffset).rgb;\ - vec3 sample2 = texture2D( colorTexture, vUv - uvOffset).rgb;\ - diffuseSum += (sample1 + sample2) * w;\ - weightSum += 2.0 * w;\ - }\ - gl_FragColor = vec4(diffuseSum/weightSum, 1.0);\n\ - }" - } ); - - }, - - getCompositeMaterial: function ( nMips ) { - - return new THREE.ShaderMaterial( { - - defines: { - "NUM_MIPS": nMips - }, - - uniforms: { - "blurTexture1": { value: null }, - "blurTexture2": { value: null }, - "blurTexture3": { value: null }, - "blurTexture4": { value: null }, - "blurTexture5": { value: null }, - "dirtTexture": { value: null }, - "bloomStrength": { value: 1.0 }, - "bloomFactors": { value: null }, - "bloomTintColors": { value: null }, - "bloomRadius": { value: 0.0 } - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "varying vec2 vUv;\ - uniform sampler2D blurTexture1;\ - uniform sampler2D blurTexture2;\ - uniform sampler2D blurTexture3;\ - uniform sampler2D blurTexture4;\ - uniform sampler2D blurTexture5;\ - uniform sampler2D dirtTexture;\ - uniform float bloomStrength;\ - uniform float bloomRadius;\ - uniform float bloomFactors[NUM_MIPS];\ - uniform vec3 bloomTintColors[NUM_MIPS];\ - \ - float lerpBloomFactor(const in float factor) { \ - float mirrorFactor = 1.2 - factor;\ - return mix(factor, mirrorFactor, bloomRadius);\ - }\ - \ - void main() {\ - gl_FragColor = bloomStrength * ( lerpBloomFactor(bloomFactors[0]) * vec4(bloomTintColors[0], 1.0) * texture2D(blurTexture1, vUv) + \ - lerpBloomFactor(bloomFactors[1]) * vec4(bloomTintColors[1], 1.0) * texture2D(blurTexture2, vUv) + \ - lerpBloomFactor(bloomFactors[2]) * vec4(bloomTintColors[2], 1.0) * texture2D(blurTexture3, vUv) + \ - lerpBloomFactor(bloomFactors[3]) * vec4(bloomTintColors[3], 1.0) * texture2D(blurTexture4, vUv) + \ - lerpBloomFactor(bloomFactors[4]) * vec4(bloomTintColors[4], 1.0) * texture2D(blurTexture5, vUv) );\ - }" - } ); - - } - -} ); - -THREE.UnrealBloomPass.BlurDirectionX = new THREE.Vector2( 1.0, 0.0 ); -THREE.UnrealBloomPass.BlurDirectionY = new THREE.Vector2( 0.0, 1.0 ); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/WaterRefractionShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/WaterRefractionShader.js deleted file mode 100644 index bb3f7d26b476f145b26a7027b70e91d63cad7cb2..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/WaterRefractionShader.js +++ /dev/null @@ -1,101 +0,0 @@ -/** - * @author Mugen87 / https://github.com/Mugen87 - * - */ - -THREE.WaterRefractionShader = { - - uniforms: { - - 'color': { - type: 'c', - value: null - }, - - 'time': { - type: 'f', - value: 0 - }, - - 'tDiffuse': { - type: 't', - value: null - }, - - 'tDudv': { - type: 't', - value: null - }, - - 'textureMatrix': { - type: 'm4', - value: null - } - - }, - - vertexShader: [ - - 'uniform mat4 textureMatrix;', - - 'varying vec2 vUv;', - 'varying vec4 vUvRefraction;', - - 'void main() {', - - ' vUv = uv;', - - ' vUvRefraction = textureMatrix * vec4( position, 1.0 );', - - ' gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );', - - '}' - - ].join( '\n' ), - - fragmentShader: [ - - 'uniform vec3 color;', - 'uniform float time;', - 'uniform sampler2D tDiffuse;', - 
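- // tDudv holds the du/dv (distortion) map; the fragment shader below samples it
- // twice to ripple the projected refraction coordinates over time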
'uniform sampler2D tDudv;', - - 'varying vec2 vUv;', - 'varying vec4 vUvRefraction;', - - 'float blendOverlay( float base, float blend ) {', - - ' return( base < 0.5 ? ( 2.0 * base * blend ) : ( 1.0 - 2.0 * ( 1.0 - base ) * ( 1.0 - blend ) ) );', - - '}', - - 'vec3 blendOverlay( vec3 base, vec3 blend ) {', - - ' return vec3( blendOverlay( base.r, blend.r ), blendOverlay( base.g, blend.g ),blendOverlay( base.b, blend.b ) );', - - '}', - - 'void main() {', - - ' float waveStrength = 0.1;', - ' float waveSpeed = 0.03;', - - // simple distortion (ripple) via dudv map (see https://www.youtube.com/watch?v=6B7IF6GOu7s) - - ' vec2 distortedUv = texture2D( tDudv, vec2( vUv.x + time * waveSpeed, vUv.y ) ).rg * waveStrength;', - ' distortedUv = vUv.xy + vec2( distortedUv.x, distortedUv.y + time * waveSpeed );', - ' vec2 distortion = ( texture2D( tDudv, distortedUv ).rg * 2.0 - 1.0 ) * waveStrength;', - - // new uv coords - - ' vec4 uv = vec4( vUvRefraction );', - ' uv.xy += distortion;', - - ' vec4 base = texture2DProj( tDiffuse, uv );', - - ' gl_FragColor = vec4( blendOverlay( base.rgb, color ), 1.0 );', - - '}' - - ].join( '\n' ) -}; diff --git a/spaces/bentrevett/named-entity-recognition/app.py b/spaces/bentrevett/named-entity-recognition/app.py deleted file mode 100644 index fa8dbf8fd0776dfff2a70ef1c1a333ba288b35df..0000000000000000000000000000000000000000 --- a/spaces/bentrevett/named-entity-recognition/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import streamlit as st -from annotated_text import annotated_text -import transformers - -ENTITY_TO_COLOR = { - 'PER': '#8ef', - 'LOC': '#faa', - 'ORG': '#afa', - 'MISC': '#fea', -} - -@st.cache(allow_output_mutation=True, show_spinner=False) -def get_pipe(): - model_name = "dslim/bert-base-NER" - model = transformers.AutoModelForTokenClassification.from_pretrained(model_name) - tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) - pipe = transformers.pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple") - return pipe - -def parse_text(text, prediction): - start = 0 - parsed_text = [] - for p in prediction: - parsed_text.append(text[start:p["start"]]) - parsed_text.append((p["word"], p["entity_group"], ENTITY_TO_COLOR[p["entity_group"]])) - start = p["end"] - parsed_text.append(text[start:]) - return parsed_text - -st.set_page_config(page_title="Named Entity Recognition") -st.title("Named Entity Recognition") -st.write("Type text into the text box and then press 'Predict' to get the named entities.") - -default_text = "My name is John Smith. I work at Microsoft. I live in Paris. My favorite painting is the Mona Lisa." 
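- # Note: with aggregation_strategy="simple" the token-classification pipeline returns
- # word-level groups, e.g. (illustrative values) {"entity_group": "PER", "word": "John Smith",
- # "start": 11, "end": 21, ...}; parse_text() above interleaves the untouched text spans with
- # (word, entity_group, color) tuples so annotated_text() can highlight each entity.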
- -text = st.text_area('Enter text here:', value=default_text) -submit = st.button('Predict') - -with st.spinner("Loading model..."): - pipe = get_pipe() - -if (submit and len(text.strip()) > 0) or len(text.strip()) > 0: - - prediction = pipe(text) - - parsed_text = parse_text(text, prediction) - - st.header("Prediction:") - annotated_text(*parsed_text) - - st.header('Raw values:') - st.json(prediction) diff --git a/spaces/bigcode/bigcode-models-leaderboard/README.md b/spaces/bigcode/bigcode-models-leaderboard/README.md deleted file mode 100644 index c8945a1c41784deef93b45ea2ed5de4264e850ce..0000000000000000000000000000000000000000 --- a/spaces/bigcode/bigcode-models-leaderboard/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Big Code Models Leaderboard -emoji: 📈 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false -models: -- WizardLM/WizardCoder-15B-V1.0 -- bigcode/octocoder -- bigcode/octogeex -- stabilityai/stablecode-completion-alpha-3b -- bigcode/starcoder -- bigcode/starcoderbase -- bigcode/starcoderbase-7b -- bigcode/starcoderbase-3b -- bigcode/starcoderbase-1b -- bigcode/santacoder -- replit/replit-code-v1-3b -- THUDM/codegeex2-6b -- Salesforce/codegen25-7b-multi -- Salesforce/codegen25-7b-mono -- Salesforce/codegen-16B-multi -- Deci/DeciCoder-1b -- codellama/CodeLlama-7b-hf -- codellama/CodeLlama-7b-Python-hf -- codellama/CodeLlama-7b-Instruct-hf -- codellama/CodeLlama-13b-hf -- codellama/CodeLlama-13b-Python-hf -- codellama/CodeLlama-13b-Instruct-hf -- codellama/CodeLlama-34b-hf -- codellama/CodeLlama-34b-Python-hf -- codellama/CodeLlama-34b-Instruct-hf -- phind/Phind-CodeLlama-34B-v2 -- phind/Phind-CodeLlama-34B-v1 -- phind/Phind-CodeLlama-34B-Python-v1 -- WizardLM/WizardCoder-Python-34B-V1.0 -- WizardLM/WizardCoder-Python-13B-V1.0 -- WizardLM/WizardCoder-3B-V1.0 -- WizardLM/WizardCoder-1B-V1.0 -- tiiuae/falcon-180B -- smallcloudai/Refact-1_6B-fim -- microsoft/phi-1 -- WisdomShell/CodeShell-7B ---- \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/save_images.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/save_images.py deleted file mode 100644 index 8b6c60c5bfec5947b0a9bf7f9fb87512e97e5ad6..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/save_images.py +++ /dev/null @@ -1,80 +0,0 @@ -from typing import List, Tuple -from einops import rearrange -import numpy as np, os, torch -from PIL import Image -from torchvision.utils import make_grid -import time - - -def get_output_folder(output_path, batch_folder): - out_path = os.path.join(output_path,time.strftime('%Y-%m')) - if batch_folder != "": - out_path = os.path.join(out_path, batch_folder) - os.makedirs(out_path, exist_ok=True) - return out_path - - -def save_samples( - args, x_samples: torch.Tensor, seed: int, n_rows: int -) -> Tuple[Image.Image, List[Image.Image]]: - """Function to save samples to disk. - Args: - args: Stable deforum diffusion arguments. - x_samples: Samples to save. - seed: Seed for the experiment. - n_rows: Number of rows in the grid. - Returns: - A tuple of the grid image and a list of the generated images. 
- ( grid_image, generated_images ) - """ - - # save samples - images = [] - grid_image = None - if args.display_samples or args.save_samples: - for index, x_sample in enumerate(x_samples): - x_sample = 255.0 * rearrange(x_sample.cpu().numpy(), "c h w -> h w c") - images.append(Image.fromarray(x_sample.astype(np.uint8))) - if args.save_samples: - images[-1].save( - os.path.join( - args.outdir, f"{args.timestring}_{index:02}_{seed}.png" - ) - ) - - # save grid - if args.display_grid or args.save_grid: - grid = torch.stack([x_samples], 0) - grid = rearrange(grid, "n b c h w -> (n b) c h w") - grid = make_grid(grid, nrow=n_rows, padding=0) - - # to image - grid = 255.0 * rearrange(grid, "c h w -> h w c").cpu().numpy() - grid_image = Image.fromarray(grid.astype(np.uint8)) - if args.save_grid: - grid_image.save( - os.path.join(args.outdir, f"{args.timestring}_{seed}_grid.png") - ) - - # return grid_image and individual sample images - return grid_image, images - -def save_image(image, image_type, filename, args, video_args, root): - if video_args.store_frames_in_ram: - root.frames_cache.append({'path':os.path.join(args.outdir, filename), 'image':image, 'image_type':image_type}) - else: - image.save(os.path.join(args.outdir, filename)) - -import cv2, gc - -def reset_frames_cache(root): - root.frames_cache = [] - gc.collect() - -def dump_frames_cache(root): - for image_cache in root.frames_cache: - if image_cache['image_type'] == 'cv2': - cv2.imwrite(image_cache['path'], image_cache['image']) - elif image_cache['image_type'] == 'PIL': - image_cache['image'].save(image_cache['path']) - # do not reset the cache since we're going to add frame erasing later function #TODO diff --git a/spaces/bingbing520/ChatGPT/assets/custom.js b/spaces/bingbing520/ChatGPT/assets/custom.js deleted file mode 100644 index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/assets/custom.js +++ /dev/null @@ -1,224 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var apSwitch = null; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); - -// gradio 页面加载好了么??? 我能动你的元素了么?? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? 
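- // (the Chinese comments in this function ask "has this element loaded yet?";
- // once the chatbot element exists, its height is set below)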
- setChatbotHeight() - } - } - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - gradioContainer.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - gradioContainer.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = 
window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/CMAA Specification 70 PDF Free Download A Comprehensive Resource for Crane Manufacturers and Engineers.md b/spaces/bioriAsaeru/text-to-voice/CMAA Specification 70 PDF Free Download A Comprehensive Resource for Crane Manufacturers and Engineers.md deleted file mode 100644 index 1d4bcabcda92773faf8ebb2ac7cc2ae6e120f7cb..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CMAA Specification 70 PDF Free Download A Comprehensive Resource for Crane Manufacturers and Engineers.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

Our free retirement and savings spreadsheets are designed for Microsoft Excel ... wheel loads for crane girder design, the hook load ... After you download the Excel document with the form to the right, enter ... CMAA Spec 70/74 requires crane builders to include a safety factor ...

    -

    cmaa specification 70 pdf free download


    DOWNLOAD ►►►►► https://urloso.com/2uyPFh



    -

the Crane Manufacturers Association of America (CMAA) Specification No. 74, Specifications for ... downloaded and is provided in a format that can be easily modified. Operators also need to ... This document is available for free download from the website in PDF format.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Deep Freeze Standard Serial Keygen Torrent.md b/spaces/bioriAsaeru/text-to-voice/Deep Freeze Standard Serial Keygen Torrent.md deleted file mode 100644 index f9ae45a00abd0d7d078b9309b0a3cf17d208f631..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Deep Freeze Standard Serial Keygen Torrent.md +++ /dev/null @@ -1,51 +0,0 @@ -
    -

    What is Deep Freeze Standard and How to Download it for Free

    - -

Deep Freeze Standard is software that helps you protect your computer from damage and downtime by making it effectively indestructible. It freezes your hard disk partitions and restores them to their original state every time you restart your computer. This way, you can prevent any changes made to your system, whether accidental or malicious, from becoming permanent.

    - -

Deep Freeze Standard is ideal for computing environments with 5 or fewer workstations, such as a home, school, or small office. It provides password protection and complete security for your computer. It also supports multi-boot environments and is compatible with Windows XP, Vista, 7, 8.1, and 10.

    -

    deep freeze standard serial keygen torrent


    DOWNLOADhttps://urloso.com/2uyQtZ



    - -

    If you want to download Deep Freeze Standard for free, you might be tempted to look for a serial keygen or a torrent link online. However, this is not a safe or legal way to get the software. You might end up downloading a fake or infected file that could harm your computer or compromise your privacy. Moreover, you might violate the copyright laws and face legal consequences.

    - -

    Therefore, we recommend that you download Deep Freeze Standard from the official website of Faronics, the software developer company. You can get a free trial version of the software for 30 days and test its features and benefits. If you like it, you can buy a license key and activate the full version of the software.

    - -

    How to Download Deep Freeze Standard from the Official Website

    - -

    To download Deep Freeze Standard from the official website of Faronics, follow these steps:

    -

    - -
      -
    1. Go to https://www.faronics.com/products/deep-freeze/standard and click on the "Free Trial" button.
    2. -
    3. Fill out the form with your name, email address, phone number, country, and industry. Agree to the terms and conditions and click on "Submit".
    4. -
    5. Check your email inbox for a message from Faronics with a download link and a license key.
    6. -
    7. Click on the download link and save the file to your computer.
    8. -
    9. Run the file and follow the installation wizard.
    10. -
    11. Enter the license key when prompted and complete the installation.
    12. -
    13. Restart your computer to activate Deep Freeze Standard.
    14. -
    - -

    Congratulations! You have successfully downloaded Deep Freeze Standard for free from the official website. You can now enjoy its features and benefits for 30 days.

    - -

    How to Use Deep Freeze Standard

    - -

    To use Deep Freeze Standard, you need to know how to freeze and thaw your hard disk partitions. Freezing means locking your partitions in their original state and preventing any changes from becoming permanent. Thawing means unlocking your partitions and allowing changes to be saved.

    - -

    To freeze or thaw your hard disk partitions, follow these steps:

    - -
      -
    1. Press Ctrl+Alt+Shift+F6 on your keyboard to open the Deep Freeze Standard console.
    2. -
    3. Enter your password and click on "OK".
    4. -
    5. Select the partitions that you want to freeze or thaw by clicking on their icons.
    6. -
    7. A red icon means that the partition is frozen. A green icon means that the partition is thawed.
    8. -
    9. Click on "Apply" and then on "Reboot".
    10. -
    11. Your computer will restart and apply the changes.
    12. -
    - -

    Note: You can also use the Command Line Control Utility (DFC) to manage Deep Freeze Standard via command line interface.

    - -

    Conclusion

    - -

    In this article, we have explained what is Deep Freeze Standard and how to download it for free from the official website of Faronics. We have also shown you how to use it to freeze and thaw your hard disk partitions. We hope that this article has helped you understand and appreciate this powerful backup and recovery software.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Hidmaker Fs [FULL Version] Downl.md b/spaces/bioriAsaeru/text-to-voice/Hidmaker Fs [FULL Version] Downl.md deleted file mode 100644 index a9d0b59a78c5028466f1edde2f5456333d2b729c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hidmaker Fs [FULL Version] Downl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Hidmaker Fs [FULL Version] Downl


    Download 🗹 https://urloso.com/2uyS71



    -
    -Download Banky W The Real Wedding Party Playlists Mp3 audio/song download Starring songs by the lover and #BAAD boy, Banky W, it's the Real. ... Mount D1. the LM3524 provides dead time DOWNLOAD the HIDmaker FS Test ... Full text of National Semiconductor Linear Applications Handbook 1994 ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/dev/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/dev/README.md deleted file mode 100644 index e3a94b67ed4b4d0c2934f074802cd00f3660f9a9..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/dev/README.md +++ /dev/null @@ -1,7 +0,0 @@ - -## Some scripts for developers to use, include: - -- `run_instant_tests.sh`: run training for a few iterations. -- `run_inference_tests.sh`: run inference on a small dataset. -- `../../dev/linter.sh`: lint the codebase before commit -- `../../dev/parse_results.sh`: parse results from log file. diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/utils.py deleted file mode 100644 index d8f1d90ba8ef797be43ef2b6e63b6e8746124504..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/utils.py +++ /dev/null @@ -1,287 +0,0 @@ -import spacy -import torch -from tqdm import tqdm -import numpy as np -import itertools -nlp = spacy.load('en_core_web_md') - - -def get_iou(box1, box2): - # box1 and box2 should be in the format [x1, y1, x2, y2] - intersection = max(0, min(box1[2], box2[2]) - max(box1[0], box2[0])) * \ - max(0, min(box1[3], box2[3]) - max(box1[1], box2[1])) - area_box1 = (box1[2] - box1[0]) * (box1[3] - box1[1]) - area_box2 = (box2[2] - box2[0]) * (box2[3] - box2[1]) - union = area_box1 + area_box2 - intersection - iou = intersection / union if union > 0 else 0 - return iou - - -# def find_root(token): -# if token.pos_ == "VERB": -# return token -# while token.dep_ not in ["pobj", "nsubj", "ROOT", "npadvmod", "dobj", "det", "prep", "punct", "cc", "conj", "acl", "dep", "appos", "relcl", "advmod", "nmod", "attr"]: -# token = token.head -# return token - - -def find_root(token): - if token.pos_ == "VERB": - return token - while token.dep_ in ["compound", "amod"]: - token = token.head - return token - -def get_object_from_text(text, verbose=False): - if len(text.split(" ")) == 3: - text = text.split(" ") - return [text[0], text[-1]] - doc = nlp(text) - if verbose: - for TT in doc: - print(TT.text, TT.pos_, TT.dep_, TT.head) - roots = set() - for i, token in enumerate(doc): - roots.add(find_root(token)) - exprs = [] - roots = sorted(list(roots), key=lambda token: token.idx) - first_nsubj = True - if verbose: - print(roots) - for root in roots: - if root.pos_ not in ["NOUN", "PROPN"]: - continue - if root.dep_ not in ["pobj", "nsubj"]: - continue - if not first_nsubj and root.dep_ in ["nsubj"]: - continue - exprs.append([]) - for token in doc: - if find_root(token) == root: - exprs[-1].append(token.text) - exprs[-1] = " ".join(exprs[-1]).replace(" '", "'") - if exprs[-1] not in text: - if verbose: - print("not in text error:", exprs[-1], "#",text) - # for TT in doc: - # print(TT.text, TT.pos_, TT.dep_, TT.head) - # import pdb; pdb.set_trace() - exprs.pop() - if first_nsubj and root.dep_ in ["nsubj"]: - first_nsubj = False - if len(exprs) <= 1: - if verbose: - print("not enough exprs error:", exprs, "#",text) - return [] - return exprs - -def is_correct(input_ids, logits, tokenizer, object: str, topk=5, N=10): - answer_id = torch.tensor(tokenizer(f" {object}", add_special_tokens=False)["input_ids"]).to(input_ids.device) - answer_begin_idx = (input_ids == answer_id[0]).nonzero() - answer_idx = None - for 
(batch_idx, IDX) in answer_begin_idx: - try: - if (input_ids[batch_idx, IDX:IDX+len(answer_id)] == answer_id).all(): - answer_idx = list(range(IDX-1, IDX+len(answer_id)-1)) - except: - pass - if answer_idx is None: - return np.inf, False, False - res = logits[0, answer_idx].softmax(-1).sort(descending=True) - values = res.values - indices = res.indices - chosen_ids = list(itertools.product(*([list(range(N))]*len(answer_idx)))) - probs = [] - for ids in chosen_ids: - prob = 1.0 - for i, id in enumerate(ids): - prob *= values[i, id] - probs.append((prob.item(), ids)) - probs.sort(reverse=True) - answer_pos = tuple([id_array.tolist().index(idx) for id_array, idx in zip(indices, answer_id)]) - ranking = [p[1] for p in probs] - # if len(answer_idx) > 1: - # import pdb; pdb.set_trace() - try: - r = ranking.index(answer_pos) - return r, r < 1, r < 5 - except: - return np.inf, False, False - -def get_bbox(visual_box_list, batch_images, prompt, model, tokenizer, media_token_id, prebox_token_id, debug=False, return_all=False): - assert isinstance(prompt, list) and len(prompt) == 1 and isinstance(prompt[0], str) - encodings = tokenizer( - prompt, - padding="longest", - truncation=True, - return_tensors="pt", - max_length=2000, - ) - input_ids = encodings["input_ids"] - attention_mask = encodings["attention_mask"] - image_start_index_list = ((input_ids == media_token_id).nonzero(as_tuple=True)[-1] + 1).tolist() - image_start_index_list = [[x] for x in image_start_index_list] - image_nums = [1] * len(input_ids) - vision_x = batch_images.cuda() - lang_x = input_ids.cuda() - attention_mask = attention_mask.cuda() - - model.debug_id = 0 - with torch.inference_mode() and torch.cuda.amp.autocast(dtype=torch.float16): - outputs = model( - vision_x=vision_x, - lang_x=lang_x, - attention_mask=attention_mask, - labels=None, - image_nums=image_nums, - image_start_index_list=image_start_index_list, - added_bbox_list=visual_box_list, - add_box=visual_box_list is not None, - relations=None, - debug_mode=False, - ) - boxes = outputs["boxes"] - scores = outputs["scores"] - if debug: - import pdb; pdb.set_trace() - if return_all: - return boxes, scores - if len(scores) == 0: - return None, None - else: - return boxes[scores.argmax()], scores.max() - - -def _eval_text_image(text, image, model, tokenizer, image_processor, vis_embed_size, media_token_id, prebox_token_id, debug=False, objects=None): - batch_images = image_processor(image).unsqueeze(0).unsqueeze(1).unsqueeze(0) - if objects is None: - objects = get_object_from_text(text) - if len(objects) == 0: - return None, None, None - if debug: - tqdm.write(text) - tqdm.write(f"{objects}") - first_idx = text.find(objects[0]) - if first_idx == 0: - first_text = f"<|#object#|>{objects[0]}<|#endofobject#|><|#visual#|>" - else: - first_text = text[:first_idx-1] + f"<|#object#|> {objects[0]}<|#endofobject#|><|#visual#|>" - - if debug: - tqdm.write(first_text) - prompt = [f"{tokenizer.bos_token}<|#image#|>{tokenizer.pad_token*vis_embed_size}<|#endofimage#|>{first_text}"] - # import pdb; pdb.set_trace() - # print("do first get_bbox |", first_text) - first_box, first_score = get_bbox(None, batch_images, prompt, model, tokenizer, media_token_id, prebox_token_id, return_all=False) - if not model.valid and debug: - import pdb; pdb.set_trace() - if first_box is not None: - added_bbox_list = [torch.tensor(first_box).unsqueeze(0).cuda() / 224] - text = first_text + "<|#box#|><|#endofobject#|>" + text[first_idx+len(objects[0]):] - else: - added_bbox_list = [] - - final_ranks = [] - 
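- # For every object mention after the first, the loop below first asks the model for
- # candidate boxes of the preceding referent (via the <|#previsual#|>/<|#prebox#|> prompt),
- # then mixes the logits of each candidate weighted by its box score, and finally records
- # the rank of the ground-truth object token (plus top-1 / top-5 hits) under that mixture.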
is_top1_list = [] - is_top5_list = [] - for kk, object in enumerate(objects): - if kk == 0: - continue - idx = text.find(objects[0]) - for t_i, temp in enumerate(objects[1:kk+1]): - # t_i is actually the previous one. This is not a bug - idx = text.find(temp, idx + len(objects[t_i])) - while idx+len(temp) != len(text) and (text[idx-1] == "#" or text[idx+len(temp)] == "#"): - # in case temp is box or object or visual or something like that - idx = text.find(temp, idx + len(temp)) - this_text = text[:idx-1] + "<|#object#|><|#previsual#|>" - # if this_text == "<|#object#|><|#previsual#|>": - # import pdb; pdb.set_trace() - if debug: - tqdm.write(this_text) - prompt = [f"{tokenizer.bos_token}<|#image#|>{tokenizer.pad_token*vis_embed_size}<|#endofimage#|>{this_text}"] - # import pdb; pdb.set_trace() - # print("do pre get_bbox |", this_text) - pre_boxes, pre_scores = get_bbox(added_bbox_list, batch_images, prompt, model, tokenizer, media_token_id, - prebox_token_id, return_all=True) - if not model.valid and debug: - import pdb; pdb.set_trace() - logits_list = [] - # pre_boxes = [pre_boxes[0]] - # pre_scores = [pre_scores[0]] - this_text = this_text + f"<|#prebox#|><|#object#|> {object}<|#endofobject#|>" - for pre_box, pre_score in zip(pre_boxes, pre_scores): - prompt = [f"{tokenizer.bos_token}<|#image#|>{tokenizer.pad_token*vis_embed_size}<|#endofimage#|>{this_text}"] - encodings = tokenizer( - prompt, - padding="longest", - truncation=True, - return_tensors="pt", - max_length=512, - ) - input_ids = encodings["input_ids"] - attention_mask = encodings["attention_mask"] - image_start_index_list = ((input_ids == media_token_id).nonzero(as_tuple=True)[-1] + 1).tolist() - image_start_index_list = [[x] for x in image_start_index_list] - image_nums = [1] * len(input_ids) - vision_x = batch_images.cuda() - lang_x = input_ids.cuda() - attention_mask = attention_mask.cuda() - this_added_bbox_list = added_bbox_list + [torch.tensor(pre_box).unsqueeze(0).cuda() / 224] - - with torch.cuda.amp.autocast(dtype=torch.float16) and torch.no_grad(): - outputs = model( - vision_x=vision_x, - lang_x=lang_x, - attention_mask=attention_mask, - image_nums=image_nums, - image_start_index_list=image_start_index_list, - added_bbox_list=this_added_bbox_list, - add_box=this_added_bbox_list is not None and len(this_added_bbox_list) != 0, - relations=None, - ) - if not model.valid and debug: - import pdb; pdb.set_trace() - logits_list.append([pre_score, outputs.logits]) - if debug: - answer_start_idx = (lang_x == tokenizer("<|#object#|>", add_special_tokens=False)["input_ids"][-1]).nonzero()[-1][1] - logits = outputs["logits"][0, answer_start_idx:] - tqdm.write(tokenizer.decode(logits[0].sort(descending=True).indices.tolist()[:10])) - # if debug: - # image.save("Atest.png") - # open_cv_image = np.array(image) - # open_cv_image = open_cv_image[:, :, ::-1].copy() - # if first_box is not None: - # open_cv_image = cv2.rectangle(open_cv_image, first_box[:2].astype(int), first_box[2:].astype(int), (255, 0, 0), 2) - # if pre_box is not None: - # open_cv_image = cv2.rectangle(open_cv_image, pre_box[:2].astype(int), pre_box[2:].astype(int), (0, 255, 0), 2) - # cv2.imwrite(f"Atest.png", open_cv_image) - # import pdb; pdb.set_trace() - pre_scores = np.array([x[0] for x in logits_list]) - final_probs = 0.0 - for score, (_, logits) in zip(pre_scores, logits_list): - final_probs += score * logits.softmax(-1) - assert input_ids.shape[:2] == final_probs.shape[:2] - _rank, is_top1, is_top5 = is_correct(input_ids, final_probs, tokenizer, object, 
topk=5) - final_ranks.append(_rank) - is_top1_list.append(is_top1) - is_top5_list.append(is_top5) - this_text = text[:idx-1] + f"<|#object#|> {object}<|#endofobject#|><|#visual#|>" - if debug: - tqdm.write(this_text) - prompt = [f"{tokenizer.bos_token}<|#image#|>{tokenizer.pad_token*vis_embed_size}<|#endofimage#|>{this_text}"] - # print("do this get_bbox |", this_text) - this_box, this_score = get_bbox(added_bbox_list, batch_images, prompt, model, tokenizer, media_token_id, prebox_token_id, return_all=False) - if not model.valid and debug: - import pdb; pdb.set_trace() - if this_box is not None: - added_bbox_list += [torch.tensor(this_box).unsqueeze(0).cuda() / 224] - text = this_text + "<|#box#|><|#endofobject#|>" + text[idx+len(object):] - return final_ranks, is_top1_list, is_top5_list - - - - -if __name__ == "__main__": - # print(get_object_from_text("there is a cookie. there is a bear. white orio cookie is next to the teddy bear. car runs on the traffic road. there is a tree.", verbose=False)) - print(get_object_from_text("President speaks to an American at a business office",verbose=True)) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/grouped_batch_sampler.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/grouped_batch_sampler.py deleted file mode 100644 index a068f7e09e6a8eee68af249d1738b4cd91a31a1f..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/distillation/grouped_batch_sampler.py +++ /dev/null @@ -1,108 +0,0 @@ -# coding=utf-8 -# Copyright 2019-present, the HuggingFace Inc. team and Facebook, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Adapted from PyTorch Vision (https://github.com/pytorch/vision/blob/master/references/detection/group_by_aspect_ratio.py) -""" -import bisect -import copy -from collections import defaultdict - -import numpy as np -from torch.utils.data import BatchSampler, Sampler - -from utils import logger - - -def _quantize(x, bins): - bins = copy.deepcopy(bins) - bins = sorted(bins) - quantized = [bisect.bisect_right(bins, y) for y in x] - return quantized - - -def create_lengths_groups(lengths, k=0): - bins = np.arange(start=3, stop=k, step=4).tolist() if k > 0 else [10] - groups = _quantize(lengths, bins) - # count number of elements per group - counts = np.unique(groups, return_counts=True)[1] - fbins = [0] + bins + [np.inf] - logger.info("Using {} as bins for aspect lengths quantization".format(fbins)) - logger.info("Count of instances per bin: {}".format(counts)) - return groups - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - Arguments: - sampler (Sampler): Base sampler. 
- group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a continuous set of integers starting from - 0, i.e. they must be in the range [0, num_groups). - batch_size (int): Size of mini-batch. - """ - - def __init__(self, sampler, group_ids, batch_size): - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = group_ids - self.batch_size = batch_size - - def __iter__(self): - buffer_per_group = defaultdict(list) - samples_per_group = defaultdict(list) - - num_batches = 0 - for idx in self.sampler: - group_id = self.group_ids[idx] - buffer_per_group[group_id].append(idx) - samples_per_group[group_id].append(idx) - if len(buffer_per_group[group_id]) == self.batch_size: - yield buffer_per_group[group_id] # TODO - num_batches += 1 - del buffer_per_group[group_id] - assert len(buffer_per_group[group_id]) < self.batch_size - - # now we have run out of elements that satisfy - # the group criteria, let's return the remaining - # elements so that the size of the sampler is - # deterministic - expected_num_batches = len(self) - num_remaining = expected_num_batches - num_batches - if num_remaining > 0: - # for the remaining batches, group the batches by similar lengths - batch_idx = [] - for group_id, idxs in sorted(buffer_per_group.items(), key=lambda x: x[0]): - batch_idx.extend(idxs) - if len(batch_idx) >= self.batch_size: - yield batch_idx[: self.batch_size] - batch_idx = batch_idx[self.batch_size :] - num_remaining -= 1 - if len(batch_idx) > 0: - yield batch_idx - num_remaining -= 1 - assert num_remaining == 0 - - def __len__(self): - """ - Return the number of mini-batches rather than the number of samples. 
- """ - return (len(self.sampler) + self.batch_size - 1) // self.batch_size diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/special.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/special.py deleted file mode 100644 index d45abd47eeaa5a1e4fb80994d3d3deb2e6bc59d3..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/special.py +++ /dev/null @@ -1,109 +0,0 @@ -from typing import Union, Sequence, MutableSequence -from uuid import UUID as PYUUID - -from clickhouse_connect.datatypes.base import TypeDef, ClickHouseType, ArrayType, UnsupportedType -from clickhouse_connect.datatypes.registry import get_from_name -from clickhouse_connect.driver.ctypes import data_conv -from clickhouse_connect.driver.insert import InsertContext -from clickhouse_connect.driver.query import QueryContext -from clickhouse_connect.driver.types import ByteSource - -empty_uuid_b = bytes(b'\x00' * 16) - - -class UUID(ClickHouseType): - valid_formats = 'string', 'native' - np_type = 'U36' - byte_size = 16 - - def python_null(self, ctx): - return '' if self.read_format(ctx) == 'string' else PYUUID(int=0) - - def _read_column_binary(self, source: ByteSource, num_rows: int, ctx: QueryContext): - if self.read_format(ctx) == 'string': - return self._read_binary_str(source, num_rows) - return data_conv.read_uuid_col(source, num_rows) - - @staticmethod - def _read_binary_str(source: ByteSource, num_rows: int): - v = source.read_array('Q', num_rows * 2) - column = [] - app = column.append - for i in range(num_rows): - ix = i << 1 - x = f'{(v[ix] << 64 | v[ix + 1]):032x}' - app(f'{x[:8]}-{x[8:12]}-{x[12:16]}-{x[16:20]}-{x[20:]}') - return column - - # pylint: disable=too-many-branches - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, ctx: InsertContext): - first = self._first_value(column) - empty = empty_uuid_b - if isinstance(first, str) or self.write_format(ctx) == 'string': - for v in column: - if v: - x = int(v, 16) - dest += (x >> 64).to_bytes(8, 'little') + (x & 0xffffffffffffffff).to_bytes(8, 'little') - else: - dest += empty - elif isinstance(first, int): - for x in column: - if x: - dest += (x >> 64).to_bytes(8, 'little') + (x & 0xffffffffffffffff).to_bytes(8, 'little') - else: - dest += empty - elif isinstance(first, PYUUID): - for v in column: - if v: - x = v.int - dest += (x >> 64).to_bytes(8, 'little') + (x & 0xffffffffffffffff).to_bytes(8, 'little') - else: - dest += empty - elif isinstance(first, (bytes, bytearray, memoryview)): - for v in column: - if v: - dest += bytes(reversed(v[:8])) + bytes(reversed(v[8:])) - else: - dest += empty - else: - dest += empty * len(column) - - -class Nothing(ArrayType): - _array_type = 'b' - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - self.nullable = True - - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, _ctx): - dest += bytes(0x30 for _ in range(len(column))) - - -class SimpleAggregateFunction(ClickHouseType): - _slots = ('element_type',) - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - self.element_type: ClickHouseType = get_from_name(type_def.values[1]) - self._name_suffix = type_def.arg_str - self.byte_size = self.element_type.byte_size - - def _data_size(self, sample: Sequence) -> int: - return 
self.element_type.data_size(sample) - - def read_column_prefix(self, source: ByteSource): - return self.element_type.read_column_prefix(source) - - def write_column_prefix(self, dest: bytearray): - self.element_type.write_column_prefix(dest) - - def _read_column_binary(self, source: ByteSource, num_rows: int, ctx: QueryContext): - return self.element_type.read_column_data(source, num_rows, ctx) - - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, ctx: InsertContext): - self.element_type.write_column_data(column, dest, ctx) - - -class AggregateFunction(UnsupportedType): - pass diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/api.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/api.py deleted file mode 100644 index 63e18c4067388aedd73da5c48211b04778aafdca..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/api.py +++ /dev/null @@ -1,37 +0,0 @@ -# encoding: utf-8 - -""" -Directly exposed API functions and classes, :func:`Document` for now. -Provides a syntactically more convenient API for interacting with the -OpcPackage graph. -""" - -from __future__ import absolute_import, division, print_function - -import os - -from docx.opc.constants import CONTENT_TYPE as CT -from docx.package import Package - - -def Document(docx=None): - """ - Return a |Document| object loaded from *docx*, where *docx* can be - either a path to a ``.docx`` file (a string) or a file-like object. If - *docx* is missing or ``None``, the built-in default document "template" - is loaded. - """ - docx = _default_docx_path() if docx is None else docx - document_part = Package.open(docx).main_document_part - if document_part.content_type != CT.WML_DOCUMENT_MAIN: - tmpl = "file '%s' is not a Word file, content type is '%s'" - raise ValueError(tmpl % (docx, document_part.content_type)) - return document_part.document - - -def _default_docx_path(): - """ - Return the path to the built-in default .docx package. 
- """ - _thisdir = os.path.split(__file__)[0] - return os.path.join(_thisdir, 'templates', 'default.docx') diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/cli.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/cli.py deleted file mode 100644 index 0316fbce82c959cf8279f0301b90ef75869c287f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/qu2cu/cli.py +++ /dev/null @@ -1,124 +0,0 @@ -import os -import argparse -import logging -from fontTools.misc.cliTools import makeOutputFileName -from fontTools.ttLib import TTFont -from fontTools.pens.qu2cuPen import Qu2CuPen -from fontTools.pens.ttGlyphPen import TTGlyphPen -import fontTools - - -logger = logging.getLogger("fontTools.qu2cu") - - -def _font_to_cubic(input_path, output_path=None, **kwargs): - font = TTFont(input_path) - logger.info("Converting curves for %s", input_path) - - stats = {} if kwargs["dump_stats"] else None - qu2cu_kwargs = { - "stats": stats, - "max_err": kwargs["max_err_em"] * font["head"].unitsPerEm, - "all_cubic": kwargs["all_cubic"], - } - - assert "gvar" not in font, "Cannot convert variable font" - glyphSet = font.getGlyphSet() - glyphOrder = font.getGlyphOrder() - glyf = font["glyf"] - for glyphName in glyphOrder: - glyph = glyphSet[glyphName] - ttpen = TTGlyphPen(glyphSet) - pen = Qu2CuPen(ttpen, **qu2cu_kwargs) - glyph.draw(pen) - glyf[glyphName] = ttpen.glyph(dropImpliedOnCurves=True) - - font["head"].glyphDataFormat = 1 - - if kwargs["dump_stats"]: - logger.info("Stats: %s", stats) - - logger.info("Saving %s", output_path) - font.save(output_path) - - -def main(args=None): - parser = argparse.ArgumentParser(prog="qu2cu") - parser.add_argument("--version", action="version", version=fontTools.__version__) - parser.add_argument( - "infiles", - nargs="+", - metavar="INPUT", - help="one or more input TTF source file(s).", - ) - parser.add_argument("-v", "--verbose", action="count", default=0) - parser.add_argument( - "-e", - "--conversion-error", - type=float, - metavar="ERROR", - default=0.001, - help="maxiumum approximation error measured in EM (default: 0.001)", - ) - parser.add_argument( - "-c", - "--all-cubic", - default=False, - action="store_true", - help="whether to only use cubic curves", - ) - - output_parser = parser.add_mutually_exclusive_group() - output_parser.add_argument( - "-o", - "--output-file", - default=None, - metavar="OUTPUT", - help=("output filename for the converted TTF."), - ) - output_parser.add_argument( - "-d", - "--output-dir", - default=None, - metavar="DIRECTORY", - help="output directory where to save converted TTFs", - ) - - options = parser.parse_args(args) - - if not options.verbose: - level = "WARNING" - elif options.verbose == 1: - level = "INFO" - else: - level = "DEBUG" - logging.basicConfig(level=level) - - if len(options.infiles) > 1 and options.output_file: - parser.error("-o/--output-file can't be used with multile inputs") - - if options.output_dir: - output_dir = options.output_dir - if not os.path.exists(output_dir): - os.mkdir(output_dir) - elif not os.path.isdir(output_dir): - parser.error("'%s' is not a directory" % output_dir) - output_paths = [ - os.path.join(output_dir, os.path.basename(p)) for p in options.infiles - ] - elif options.output_file: - output_paths = [options.output_file] - else: - output_paths = [ - makeOutputFileName(p, overWrite=True, suffix=".cubic") - for p in options.infiles 
- ] - - kwargs = dict( - dump_stats=options.verbose > 0, - max_err_em=options.conversion_error, - all_cubic=options.all_cubic, - ) - - for input_path, output_path in zip(options.infiles, output_paths): - _font_to_cubic(input_path, output_path, **kwargs) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/state.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/state.py deleted file mode 100644 index 9722fa31e5240b8975af313d58bbcd83bb235fcd..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/state.py +++ /dev/null @@ -1,50 +0,0 @@ -"""gr.State() component.""" - -from __future__ import annotations - -from copy import deepcopy -from typing import Any - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import SimpleSerializable - -from gradio.components.base import IOComponent - -set_documentation_group("component") - - -@document() -class State(IOComponent, SimpleSerializable): - """ - Special hidden component that stores session state across runs of the demo by the - same user. The value of the State variable is cleared when the user refreshes the page. - - Preprocessing: No preprocessing is performed - Postprocessing: No postprocessing is performed - Demos: blocks_simple_squares - Guides: real-time-speech-recognition - """ - - allow_string_shortcut = False - - def __init__( - self, - value: Any = None, - **kwargs, - ): - """ - Parameters: - value: the initial value (of arbitrary type) of the state. The provided argument is deepcopied. If a callable is provided, the function will be called whenever the app loads to set the initial value of the state. - """ - self.stateful = True - IOComponent.__init__(self, value=deepcopy(value), **kwargs) - - -class Variable(State): - """Variable was renamed to State. 
This class is kept for backwards compatibility.""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def get_block_name(self): - return "state" diff --git a/spaces/cleanmaster/akagi-sovits3/models.py b/spaces/cleanmaster/akagi-sovits3/models.py deleted file mode 100644 index bdbce8445304abda792f235a4761b831fd6f4d12..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import attentions -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = 
attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, 
model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - g = self.emb_g(g).transpose(1,2) - - z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, 
f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # o = self.dec(z_slice, g=g) - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, f0, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - - z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - - o = self.dec(z * c_mask, g=g, f0=f0) - - return o diff --git a/spaces/codertoro/gpt-academic/Dockerfile b/spaces/codertoro/gpt-academic/Dockerfile deleted file mode 100644 index b364fdbe4ec5b5c852f23237d6edc87eb8326e64..0000000000000000000000000000000000000000 --- a/spaces/codertoro/gpt-academic/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.11 -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt -COPY requirements.txt . -RUN pip3 install -r requirements.txt - -COPY . . - -CMD ["python3", "-u", "main.py"] \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/doc/examples/encode_video.c b/spaces/colakin/video-generater/public/ffmpeg/doc/examples/encode_video.c deleted file mode 100644 index 4fae146f2e9fe3503902e23c48519a2a4de8182f..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/doc/examples/encode_video.c +++ /dev/null @@ -1,216 +0,0 @@ -/* - * Copyright (c) 2001 Fabrice Bellard - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN - * THE SOFTWARE. - */ - -/** - * @file libavcodec encoding video API usage example - * @example encode_video.c - * - * Generate synthetic video data and encode it to an output file. 
- */ - -#include -#include -#include - -#include - -#include -#include - -static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt, - FILE *outfile) -{ - int ret; - - /* send the frame to the encoder */ - if (frame) - printf("Send frame %3"PRId64"\n", frame->pts); - - ret = avcodec_send_frame(enc_ctx, frame); - if (ret < 0) { - fprintf(stderr, "Error sending a frame for encoding\n"); - exit(1); - } - - while (ret >= 0) { - ret = avcodec_receive_packet(enc_ctx, pkt); - if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) - return; - else if (ret < 0) { - fprintf(stderr, "Error during encoding\n"); - exit(1); - } - - printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size); - fwrite(pkt->data, 1, pkt->size, outfile); - av_packet_unref(pkt); - } -} - -int main(int argc, char **argv) -{ - const char *filename, *codec_name; - const AVCodec *codec; - AVCodecContext *c= NULL; - int i, ret, x, y; - FILE *f; - AVFrame *frame; - AVPacket *pkt; - uint8_t endcode[] = { 0, 0, 1, 0xb7 }; - - if (argc <= 2) { - fprintf(stderr, "Usage: %s \n", argv[0]); - exit(0); - } - filename = argv[1]; - codec_name = argv[2]; - - /* find the mpeg1video encoder */ - codec = avcodec_find_encoder_by_name(codec_name); - if (!codec) { - fprintf(stderr, "Codec '%s' not found\n", codec_name); - exit(1); - } - - c = avcodec_alloc_context3(codec); - if (!c) { - fprintf(stderr, "Could not allocate video codec context\n"); - exit(1); - } - - pkt = av_packet_alloc(); - if (!pkt) - exit(1); - - /* put sample parameters */ - c->bit_rate = 400000; - /* resolution must be a multiple of two */ - c->width = 352; - c->height = 288; - /* frames per second */ - c->time_base = (AVRational){1, 25}; - c->framerate = (AVRational){25, 1}; - - /* emit one intra frame every ten frames - * check frame pict_type before passing frame - * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I - * then gop_size is ignored and the output of encoder - * will always be I frame irrespective to gop_size - */ - c->gop_size = 10; - c->max_b_frames = 1; - c->pix_fmt = AV_PIX_FMT_YUV420P; - - if (codec->id == AV_CODEC_ID_H264) - av_opt_set(c->priv_data, "preset", "slow", 0); - - /* open it */ - ret = avcodec_open2(c, codec, NULL); - if (ret < 0) { - fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret)); - exit(1); - } - - f = fopen(filename, "wb"); - if (!f) { - fprintf(stderr, "Could not open %s\n", filename); - exit(1); - } - - frame = av_frame_alloc(); - if (!frame) { - fprintf(stderr, "Could not allocate video frame\n"); - exit(1); - } - frame->format = c->pix_fmt; - frame->width = c->width; - frame->height = c->height; - - ret = av_frame_get_buffer(frame, 0); - if (ret < 0) { - fprintf(stderr, "Could not allocate the video frame data\n"); - exit(1); - } - - /* encode 1 second of video */ - for (i = 0; i < 25; i++) { - fflush(stdout); - - /* Make sure the frame data is writable. - On the first round, the frame is fresh from av_frame_get_buffer() - and therefore we know it is writable. - But on the next rounds, encode() will have called - avcodec_send_frame(), and the codec may have kept a reference to - the frame in its internal structures, that makes the frame - unwritable. - av_frame_make_writable() checks that and allocates a new buffer - for the frame only if necessary. - */ - ret = av_frame_make_writable(frame); - if (ret < 0) - exit(1); - - /* Prepare a dummy image. - In real code, this is where you would have your own logic for - filling the frame. FFmpeg does not care what you put in the - frame. 
- */ - /* Y */ - for (y = 0; y < c->height; y++) { - for (x = 0; x < c->width; x++) { - frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3; - } - } - - /* Cb and Cr */ - for (y = 0; y < c->height/2; y++) { - for (x = 0; x < c->width/2; x++) { - frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2; - frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5; - } - } - - frame->pts = i; - - /* encode the image */ - encode(c, frame, pkt, f); - } - - /* flush the encoder */ - encode(c, NULL, pkt, f); - - /* Add sequence end code to have a real MPEG file. - It makes only sense because this tiny examples writes packets - directly. This is called "elementary stream" and only works for some - codecs. To create a valid file, you usually need to write packets - into a proper file format or protocol; see mux.c. - */ - if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO) - fwrite(endcode, 1, sizeof(endcode), f); - fclose(f); - - avcodec_free_context(&c); - av_frame_free(&frame); - av_packet_free(&pkt); - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ipu_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ipu_parser.c deleted file mode 100644 index 1193a65b1bb88be16060f8ca077b88a799e397f1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ipu_parser.c +++ /dev/null @@ -1,77 +0,0 @@ -/* - * IPU parser - * Copyright (c) 2020 Paul B Mahol - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * IPU parser - */ - -#include "parser.h" - -typedef struct IPUParseContext { - ParseContext pc; -} IPUParseContext; - -static int ipu_parse(AVCodecParserContext *s, AVCodecContext *avctx, - const uint8_t **poutbuf, int *poutbuf_size, - const uint8_t *buf, int buf_size) -{ - IPUParseContext *ipc = s->priv_data; - uint32_t state = ipc->pc.state; - int next = END_NOT_FOUND, i = 0; - - s->pict_type = AV_PICTURE_TYPE_NONE; - s->duration = 1; - - *poutbuf_size = 0; - *poutbuf = NULL; - - if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) { - next = buf_size; - } else { - for (; i < buf_size; i++) { - state = (state << 8) | buf[i]; - if (state == 0x1b0) { - next = i + 1; - break; - } - } - - ipc->pc.state = state; - if (ff_combine_frame(&ipc->pc, next, &buf, &buf_size) < 0) { - *poutbuf = NULL; - *poutbuf_size = 0; - return buf_size; - } - } - - *poutbuf = buf; - *poutbuf_size = buf_size; - - return next; -} - -const AVCodecParser ff_ipu_parser = { - .codec_ids = { AV_CODEC_ID_IPU }, - .priv_data_size = sizeof(IPUParseContext), - .parser_parse = ipu_parse, - .parser_close = ff_parse_close, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libilbc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libilbc.c deleted file mode 100644 index 9ca90bf0c668aed8547a48c7350d2f92d74166d1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libilbc.c +++ /dev/null @@ -1,218 +0,0 @@ -/* - * iLBC decoder/encoder stub - * Copyright (c) 2012 Martin Storsjo - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "libavutil/opt.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "encode.h" - -#ifndef LIBILBC_VERSION_MAJOR -#define LIBILBC_VERSION_MAJOR 2 -#endif - -static int get_mode(AVCodecContext *avctx) -{ - if (avctx->block_align == 38) - return 20; - else if (avctx->block_align == 50) - return 30; - else if (avctx->bit_rate > 0) - return avctx->bit_rate <= 14000 ? 
30 : 20; - else - return -1; -} - -typedef struct ILBCDecContext { - const AVClass *class; -#if LIBILBC_VERSION_MAJOR < 3 - iLBC_Dec_Inst_t decoder; -#else - IlbcDecoder decoder; -#endif - int enhance; -} ILBCDecContext; - -static const AVOption ilbc_dec_options[] = { - { "enhance", "Enhance the decoded audio (adds delay)", offsetof(ILBCDecContext, enhance), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_DECODING_PARAM }, - { NULL } -}; - -static const AVClass ilbc_dec_class = { - .class_name = "libilbc", - .item_name = av_default_item_name, - .option = ilbc_dec_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static av_cold int ilbc_decode_init(AVCodecContext *avctx) -{ - ILBCDecContext *s = avctx->priv_data; - int mode; - - if ((mode = get_mode(avctx)) < 0) { - av_log(avctx, AV_LOG_ERROR, "iLBC frame mode not indicated\n"); - return AVERROR(EINVAL); - } - - WebRtcIlbcfix_InitDecode(&s->decoder, mode, s->enhance); - - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - avctx->sample_rate = 8000; - avctx->sample_fmt = AV_SAMPLE_FMT_S16; - - return 0; -} - -static int ilbc_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - ILBCDecContext *s = avctx->priv_data; - int ret; - - if (s->decoder.no_of_bytes > buf_size) { -#if LIBILBC_VERSION_MAJOR < 3 - av_log(avctx, AV_LOG_ERROR, "iLBC frame too short (%u, should be %u)\n", -#else - av_log(avctx, AV_LOG_ERROR, "iLBC frame too short (%u, should be " - "%"SIZE_SPECIFIER")\n", -#endif - buf_size, s->decoder.no_of_bytes); - return AVERROR_INVALIDDATA; - } - - frame->nb_samples = s->decoder.blockl; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - WebRtcIlbcfix_DecodeImpl((int16_t *) frame->data[0], (const uint16_t *) buf, &s->decoder, 1); - - *got_frame_ptr = 1; - - return s->decoder.no_of_bytes; -} - -const FFCodec ff_libilbc_decoder = { - .p.name = "libilbc", - CODEC_LONG_NAME("iLBC (Internet Low Bitrate Codec)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ILBC, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(ILBCDecContext), - .init = ilbc_decode_init, - FF_CODEC_DECODE_CB(ilbc_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, - .p.priv_class = &ilbc_dec_class, -}; - -typedef struct ILBCEncContext { - const AVClass *class; -#if LIBILBC_VERSION_MAJOR < 3 - iLBC_Enc_Inst_t encoder; -#else - IlbcEncoder encoder; -#endif - int mode; -} ILBCEncContext; - -static const AVOption ilbc_enc_options[] = { - { "mode", "iLBC mode (20 or 30 ms frames)", offsetof(ILBCEncContext, mode), AV_OPT_TYPE_INT, { .i64 = 20 }, 20, 30, AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM }, - { NULL } -}; - -static const AVClass ilbc_enc_class = { - .class_name = "libilbc", - .item_name = av_default_item_name, - .option = ilbc_enc_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static av_cold int ilbc_encode_init(AVCodecContext *avctx) -{ - ILBCEncContext *s = avctx->priv_data; - int mode; - - if (avctx->sample_rate != 8000) { - av_log(avctx, AV_LOG_ERROR, "Only 8000Hz sample rate supported\n"); - return AVERROR(EINVAL); - } - - if (avctx->ch_layout.nb_channels != 1) { - av_log(avctx, AV_LOG_ERROR, "Only mono supported\n"); - return AVERROR(EINVAL); - } - - if ((mode = get_mode(avctx)) > 0) - s->mode = mode; - else - s->mode = s->mode != 30 ? 
20 : 30; - WebRtcIlbcfix_InitEncode(&s->encoder, s->mode); - - avctx->block_align = s->encoder.no_of_bytes; - avctx->frame_size = s->encoder.blockl; - - return 0; -} - -static int ilbc_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - ILBCEncContext *s = avctx->priv_data; - int ret; - - if ((ret = ff_alloc_packet(avctx, avpkt, 50)) < 0) - return ret; - - WebRtcIlbcfix_EncodeImpl((uint16_t *) avpkt->data, (const int16_t *) frame->data[0], &s->encoder); - - avpkt->size = s->encoder.no_of_bytes; - *got_packet_ptr = 1; - return 0; -} - -static const FFCodecDefault ilbc_encode_defaults[] = { - { "b", "0" }, - { NULL } -}; - -const FFCodec ff_libilbc_encoder = { - .p.name = "libilbc", - CODEC_LONG_NAME("iLBC (Internet Low Bitrate Codec)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ILBC, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(ILBCEncContext), - .init = ilbc_encode_init, - FF_CODEC_ENCODE_CB(ilbc_encode_frame), - .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S16, - AV_SAMPLE_FMT_NONE }, - .defaults = ilbc_encode_defaults, - .p.priv_class = &ilbc_enc_class, - .p.wrapper_name = "libbilbc", -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vc1dsp_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vc1dsp_mips.h deleted file mode 100644 index 8fcff26b14b7d9cc817ad29738132d2a68682076..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vc1dsp_mips.h +++ /dev/null @@ -1,217 +0,0 @@ -/* - * Copyright (c) 2016 Zhou Xiaoyong - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_MIPS_VC1DSP_MIPS_H -#define AVCODEC_MIPS_VC1DSP_MIPS_H - -#include "libavcodec/vc1dsp.h" - -void ff_put_vc1_mspel_mc00_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc01_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc02_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc03_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc10_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc11_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc12_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc13_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc20_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc21_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc22_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc23_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc30_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc31_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc32_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc33_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); - -void ff_avg_vc1_mspel_mc00_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc01_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc02_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc03_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc10_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc11_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc12_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc13_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc20_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc21_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc22_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc23_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc30_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc31_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc32_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc33_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); - - -void ff_put_vc1_mspel_mc00_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc01_16_mmi(uint8_t 
*dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc02_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc03_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc10_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc11_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc12_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc13_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc20_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc21_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc22_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc23_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc30_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc31_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc32_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_put_vc1_mspel_mc33_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); - -void ff_avg_vc1_mspel_mc00_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc01_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc02_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc03_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc10_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc11_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc12_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc13_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc20_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc21_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc22_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc23_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc30_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc31_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc32_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); -void ff_avg_vc1_mspel_mc33_16_mmi(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride, int rnd); - -void ff_vc1_inv_trans_8x8_mmi(int16_t block[64]); -void ff_vc1_inv_trans_8x4_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_4x8_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_4x4_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); - -void ff_vc1_inv_trans_4x4_dc_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_4x8_dc_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_8x4_dc_mmi(uint8_t *dest, 
ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_8x8_dc_mmi(uint8_t *dest, ptrdiff_t linesize, int16_t *block); - -void ff_vc1_v_overlap_mmi(uint8_t *src, ptrdiff_t stride); -void ff_vc1_h_overlap_mmi(uint8_t *src, ptrdiff_t stride); -void ff_vc1_v_s_overlap_mmi(int16_t *top, int16_t *bottom); -void ff_vc1_h_s_overlap_mmi(int16_t *left, int16_t *right, ptrdiff_t left_stride, ptrdiff_t right_stride, int flags); - -void ff_vc1_v_loop_filter4_mmi(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter4_mmi(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_v_loop_filter8_mmi(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter8_mmi(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_v_loop_filter16_mmi(uint8_t *src, ptrdiff_t stride, int pq); -void ff_vc1_h_loop_filter16_mmi(uint8_t *src, ptrdiff_t stride, int pq); - -void ff_put_no_rnd_vc1_chroma_mc8_mmi(uint8_t *dst /* align 8 */, - const uint8_t *src /* align 1 */, - ptrdiff_t stride, int h, int x, int y); -void ff_put_no_rnd_vc1_chroma_mc4_mmi(uint8_t *dst /* align 8 */, - const uint8_t *src /* align 1 */, - ptrdiff_t stride, int h, int x, int y); -void ff_avg_no_rnd_vc1_chroma_mc8_mmi(uint8_t *dst /* align 8 */, - const uint8_t *src /* align 1 */, - ptrdiff_t stride, int h, int x, int y); -void ff_avg_no_rnd_vc1_chroma_mc4_mmi(uint8_t *dst /* align 8 */, - const uint8_t *src /* align 1 */, - ptrdiff_t stride, int h, int x, int y); - -void ff_vc1_inv_trans_8x8_msa(int16_t block[64]); -void ff_vc1_inv_trans_8x4_msa(uint8_t *dest, ptrdiff_t linesize, int16_t *block); -void ff_vc1_inv_trans_4x8_msa(uint8_t *dest, ptrdiff_t linesize, int16_t *block); - -#define FF_PUT_VC1_MSPEL_MC_MSA(hmode, vmode) \ -void ff_put_vc1_mspel_mc ## hmode ## vmode ## _msa(uint8_t *dst, \ - const uint8_t *src, \ - ptrdiff_t stride, int rnd); \ -void ff_put_vc1_mspel_mc ## hmode ## vmode ## _16_msa(uint8_t *dst, \ - const uint8_t *src, \ - ptrdiff_t stride, int rnd); - -FF_PUT_VC1_MSPEL_MC_MSA(1, 1); -FF_PUT_VC1_MSPEL_MC_MSA(1, 2); -FF_PUT_VC1_MSPEL_MC_MSA(1, 3); - -FF_PUT_VC1_MSPEL_MC_MSA(2, 1); -FF_PUT_VC1_MSPEL_MC_MSA(2, 2); -FF_PUT_VC1_MSPEL_MC_MSA(2, 3); - -FF_PUT_VC1_MSPEL_MC_MSA(3, 1); -FF_PUT_VC1_MSPEL_MC_MSA(3, 2); -FF_PUT_VC1_MSPEL_MC_MSA(3, 3); -#endif /* AVCODEC_MIPS_VC1DSP_MIPS_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tiles Hop EDM Rush Mod APK - The Ultimate Music Game with Hack Features.md b/spaces/congsaPfin/Manga-OCR/logs/Tiles Hop EDM Rush Mod APK - The Ultimate Music Game with Hack Features.md deleted file mode 100644 index 76b590fe8874e2339c76fdd5331631093cb62ba1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tiles Hop EDM Rush Mod APK - The Ultimate Music Game with Hack Features.md +++ /dev/null @@ -1,144 +0,0 @@ -
    -

    Tiles Hop: EDM Rush Hack Mod APK - How to Download and Play

    -

    If you love music and rhythm games, you might have heard of Tiles Hop: EDM Rush, a popular mobile game that lets you bounce a ball on different tiles according to the beats of your favorite songs. But did you know that there is a hacked version of this game that gives you unlimited money, unlocked songs, and more? In this article, we will tell you everything you need to know about Tiles Hop: EDM Rush Hack Mod APK, how to download and install it, and how to play it. Let's get started!

    -

    tiles hop edm rush hack mod apk


    Download File >>> https://urlca.com/2uOgts



    -

    What is Tiles Hop: EDM Rush?

    -

    Tiles Hop: EDM Rush is a music game developed by AMANOTES PTE LTD, a studio that specializes in creating casual games with catchy tunes. The game was released in 2018 and has since gained millions of fans around the world. The game is available for both Android and iOS devices, and you can download it for free from the Google Play Store or the App Store.

    -

    Features of Tiles Hop: EDM Rush

    -

    Some of the features that make Tiles Hop: EDM Rush a fun and addictive game are:

    -
      -
        • A variety of songs: You can choose from hundreds of songs in different genres, such as pop, rock, EDM, classical, hip hop, and more. You can also upload your own songs or use the online mode to play with songs from YouTube.
        • Simple but challenging gameplay: All you have to do is tap the screen to make the ball jump from one tile to another, following the rhythm of the music. The game gets harder as you progress, with faster speeds, more obstacles, and different patterns.
        • Colorful and dynamic graphics: The game has a vibrant and eye-catching design, with different backgrounds, themes, and effects that change according to the song. You can also customize your ball with different skins and trails.
        • A rewarding system: The game rewards you with coins, gems, stars, and achievements as you complete levels, unlock songs, and improve your skills. You can use these resources to buy new balls, themes, and songs.
    
    -

    How to play Tiles Hop: EDM Rush

    -

    To play Tiles Hop: EDM Rush, you need to follow these steps:

    -
      -
        1. Select a song from the list or upload your own.
        2. Tap the screen to start the game.
        3. Tap the screen again to make the ball jump from one tile to another.
        4. Try to avoid falling off the tiles or hitting the obstacles.
        5. Collect coins, gems, stars, and bonuses along the way.
        6. Finish the level and get your score.
        7. Repeat with another song or level.
    
    -

    What is Tiles Hop: EDM Rush Hack Mod APK?

    -

        Tiles Hop: EDM Rush Hack Mod APK is a modified version of the original game that gives you some advantages that are not available in the official version. For example, with this hack mod APK, you can get the following:
    

    -

    

    -

    Benefits of Tiles Hop: EDM Rush Hack Mod APK

    -
        • Unlimited money: You can get unlimited coins and gems that you can use to buy new balls, themes, and songs. You don't have to worry about running out of resources or spending real money.
        • Unlocked songs: You can access all the songs in the game, including the premium ones that require gems or stars to unlock. You can enjoy playing with any song you like, without any restrictions.
        • No ads: You can play the game without any annoying ads that interrupt your gameplay or consume your data. You can have a smooth and uninterrupted experience.
    
    -

    Risks of Tiles Hop: EDM Rush Hack Mod APK


    However, before you download and install Tiles Hop: EDM Rush Hack Mod APK, you should also be aware of some risks that come with it. For example:

    • It is not safe or legal: Tiles Hop: EDM Rush Hack Mod APK is not an official version of the game, and it is not authorized or endorsed by the developers. It is a third-party app that may contain malware, viruses, or spyware that can harm your device or steal your personal information. It may also violate the terms of service and privacy policy of the game, and you may face legal consequences or get banned from the game if you use it.
    • It is not compatible or stable: Tiles Hop: EDM Rush Hack Mod APK may not work properly with your device or the latest version of the game. It may crash, freeze, or glitch during the gameplay, and you may lose your progress or data. It may also conflict with other apps or games on your device, and cause performance issues or errors.
    • It is not fun or fair: Tiles Hop: EDM Rush Hack Mod APK may ruin the fun and challenge of the game, as you don't have to work hard or improve your skills to get rewards or achievements. It may also give you an unfair advantage over other players who play the game legitimately, and make the game boring or unbalanced.

    How to download and install Tiles Hop: EDM Rush Hack Mod APK?


    If you still want to try Tiles Hop: EDM Rush Hack Mod APK, despite the risks, you need to follow these steps:


    Steps to download and install Tiles Hop: EDM Rush Hack Mod APK

    1. Find a reliable and trustworthy website that offers Tiles Hop: EDM Rush Hack Mod APK for free. You can search on Google or use some of the links below:
       • [Tiles Hop: EDM Rush MOD APK 3.6.0 (Unlimited Money) Download]
       • [Tiles Hop: EDM Rush v3.6.0 (MOD, Unlimited Money) for Android]
       • [Tiles Hop: EDM Rush MOD APK 3.6.0 (Unlimited Money) - Apkdone]
    2. Download the APK file from the website to your device. Make sure you have enough storage space and a stable internet connection.
    3. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    4. Locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish (a command-line alternative is sketched after this list).
    5. Launch the game from your app drawer and enjoy playing with Tiles Hop: EDM Rush Hack Mod APK.
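
    If you are comfortable with a command line, the same sideload can be scripted with adb instead of tapping through the file manager. The snippet below is only a rough sketch, not something the article itself describes: it assumes the Android platform tools are installed, USB debugging is enabled on the phone, and the file name `tiles_hop_mod.apk` is just a placeholder for whatever you downloaded.

```python
import subprocess
import sys
from pathlib import Path


def sideload_apk(apk_path: str) -> None:
    """Install an APK on a connected Android device via adb (assumes adb is on PATH)."""
    apk = Path(apk_path)
    if not apk.is_file():
        sys.exit(f"APK not found: {apk}")
    # Show connected devices first, so a missing/unauthorized device is obvious.
    subprocess.run(["adb", "devices"], check=True)
    # -r reinstalls over an existing copy of the app while keeping its data.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)


if __name__ == "__main__":
    sideload_apk("tiles_hop_mod.apk")  # placeholder file name
```

    Either way, the app still has to come from somewhere you trust; adb only changes how it gets onto the phone.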

    Tips to avoid malware and viruses when downloading Tiles Hop: EDM Rush Hack Mod APK


    To protect your device and data from malware and viruses when downloading Tiles Hop: EDM Rush Hack Mod APK, you should follow these tips:

    • Use a reputable antivirus app: You should install a reliable antivirus app on your device and scan the APK file before installing it. This will help you detect and remove any malicious code or threats that may be hidden in the file (a checksum check is also sketched after this list).
    • Use a VPN app: You should use a virtual private network (VPN) app on your device and connect to a secure server when downloading the APK file. This will help you hide your IP address and location, and encrypt your data from hackers or trackers who may try to spy on your online activity.
    • Use a backup app: You should use a backup app on your device and create a backup of your important data before installing the APK file. This will help you restore your data in case something goes wrong during the installation process or after playing the game.
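
    One extra precaution the tips above only imply: if the site you download from publishes a checksum for the APK, compare it before installing. The sketch below is a minimal example of that idea; the file name and the expected hash are hypothetical placeholders, not values taken from the article or from any real download page.

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory low."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values: substitute the hash published by the site you trust.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("tiles_hop_mod.apk")
print("OK" if actual == EXPECTED else f"Mismatch: {actual}")
```

    If the two values don't match, delete the file instead of installing it.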

    How to play Tiles Hop: EDM Rush Hack Mod APK?


    Once you have downloaded and installed Tiles Hop: EDM Rush Hack Mod APK, you can play it like the original game, but with some differences and advantages. Here are some tips and tricks to enjoy playing with Tiles Hop: EDM Rush Hack Mod APK:


    Differences between Tiles Hop: EDM Rush Hack Mod APK and the original game


    Some of the differences that you will notice when playing with Tiles Hop: EDM Rush Hack Mod APK are:

    • You have unlimited money: You will start the game with a huge amount of coins and gems that you can use to buy anything you want. You can also earn more money by playing the game, but you will never run out of it.
    • You have unlocked songs: You will have access to all the songs in the game, including the ones that are normally locked or require gems or stars to unlock. You can play with any song you like, without any limitations.
    • You have no ads: You will not see any ads in the game, either before, during, or after the gameplay. You can play the game without any interruptions or distractions.

    Tips and tricks to enjoy Tiles Hop: EDM Rush Hack Mod APK


    Some of the tips and tricks that you can use to enjoy playing with Tiles Hop: EDM Rush Hack Mod APK are:

    • Try different songs and genres: With Tiles Hop: EDM Rush Hack Mod APK, you can explore a wide range of songs and genres that suit your mood and taste. You can also upload your own songs or use the online mode to play with songs from YouTube. You can discover new music and have fun with different beats and rhythms.
    • Challenge yourself and improve your skills: Even though Tiles Hop: EDM Rush Hack Mod APK gives you some advantages, you still need to tap the screen and follow the music to play the game. You can challenge yourself by playing with faster speeds, harder levels, and more obstacles. You can also improve your skills by practicing with different songs and patterns.
    • Share your results and achievements with your friends: Tiles Hop: EDM Rush Hack Mod APK allows you to share your results and achievements with your friends on social media. You can show off your high scores, unlocked songs, and customized balls. You can also invite your friends to play the game with you and compare your performances.

    Conclusion


    Tiles Hop: EDM Rush is a music game that lets you bounce a ball on different tiles to the beat of your favorite songs. It is a fun and addictive game with a variety of songs, simple but challenging gameplay, colorful and dynamic graphics, and a rewarding progression system. If you want some extra benefits, such as unlimited money, unlocked songs, and no ads, you can try Tiles Hop: EDM Rush Hack Mod APK, a modified version of the original game that gives you these advantages. However, you should also be aware of the risks that come with it: it can be unsafe, illegal, incompatible, or unstable, and it can make the game less fun and less fair. Download and install Tiles Hop: EDM Rush Hack Mod APK at your own risk, and follow the tips above to avoid malware and viruses when downloading it. You should also follow the tips and tricks, such as trying different songs and genres, challenging yourself to improve your skills, and sharing your results and achievements with your friends, to get the most out of it.


    FAQs


    Here are some frequently asked questions about Tiles Hop: EDM Rush Hack Mod APK:

    1. Is Tiles Hop: EDM Rush Hack Mod APK free?

      Yes, Tiles Hop: EDM Rush Hack Mod APK is free to download and install from various websites that offer it. However, you should be careful when choosing a website to download it from, as some of them may contain malware or viruses that can harm your device or data.

    2. Is Tiles Hop: EDM Rush Hack Mod APK safe?

      No, Tiles Hop: EDM Rush Hack Mod APK is not safe to use, as it is not an official version of the game, and it is not authorized or endorsed by the developers. It may contain malware or viruses that can harm your device or data. It may also violate the terms of service and privacy policy of the game, and you may face legal consequences or get banned from the game if you use it.

    3. Is Tiles Hop: EDM Rush Hack Mod APK compatible?

      No, Tiles Hop: EDM Rush Hack Mod APK may not be compatible with your device or the latest version of the game. It may crash, freeze, or glitch during the gameplay, and you may lose your progress or data. It may also conflict with other apps or games on your device, and cause performance issues or errors.

    4. Is Tiles Hop: EDM Rush Hack Mod APK fun?

      It depends on your preference and perspective. Some people may find Tiles Hop: EDM Rush Hack Mod APK fun, as it gives them advantages that make the game easier and more enjoyable. Others may find that it ruins the fun and challenge of the game, and makes the game boring or unbalanced.

    5. Is Tiles Hop: EDM Rush Hack Mod APK fair?

      No, Tiles Hop: EDM Rush Hack Mod APK is not fair to use, as it gives you an unfair advantage over other players who play the game legitimately. It may also affect the ranking and leaderboard system of the game, and make the game unfair for everyone.

    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/64mb Video Card With Directx 9 Compatible Drivers Downloadl Where to Find and How to Use.md b/spaces/contluForse/HuggingGPT/assets/64mb Video Card With Directx 9 Compatible Drivers Downloadl Where to Find and How to Use.md deleted file mode 100644 index 5c0861a4a89d3efc28d15765d9f3add4f664faeb..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/64mb Video Card With Directx 9 Compatible Drivers Downloadl Where to Find and How to Use.md +++ /dev/null @@ -1,14 +0,0 @@ - -

    DirectX 9.0c (which includes the runtime web installer) is a selection of technologies developed by Microsoft which make running rich and immersive gaming on Windows systems possible. Most modern games require this prerequisite to be installed on your Windows system in order to function, and the DirectX 9.0c package may be used to satisfy that requirement.

    DirectX 9.0c includes support for Pixel Shader and Vertex Shader 3.0, along with many new features across all technologies, which can be accessed by applications using DirectX. The latest version of the Windows gaming API includes the new High-Level Shader Language, which new games can take advantage of.

    It's already installed on Windows 7 and above

    Windows 7 comes with a newer version of DirectX but is fully compatible with all of the new features of DirectX 9.0 and above. Additionally, in order to take advantage of the features of DirectX, you must ensure that you have installed a DirectX compliant video card. If you want an even newer version, DirectX 10 is also available.

    This version is the download of the redistributable version. That means it may be included in software packages or simply used freely by anyone wishing to update their DirectX version on Windows XP or Windows 7 (32-bit). The package contains the installer for Windows XP and the installer for Windows 7, and is compatible with the redistributables from February 2010 and June 2010. This web installer package works with DirectX 9 graphics devices with WDDM 1.0 or higher drivers. If you run into issues when installing this package, you may have an older video card that is not compatible with DirectX 9.0c.

    Please note: if you are using Windows 7 and a game or other program requires a compatible video or audio driver, you should check whether a patch is available for the game or program in question. In some cases, simply installing updated drivers for your video or audio card solves the problem. If updating drivers doesn't help, running the program in compatibility mode may.

    DirectShow accelerates hardware video rendering, and Direct3D enhances low-level graphics programmability with new programmable vertex and pixel shader 2.0 models. DirectX 9.0c also includes support for Pixel Shader and Vertex Shader 3.0.

    The program can't start because d3dx9_35.dll is missing

    Finally, you can give this package a go if you experience this error on your computer. Though if you're running Windows 8, Windows 10 or Windows 11, it's unlikely to be of much help, as these files come standard with the operating system itself.

    Features of DirectX 9.0c

  • 3D Audio: Supports 3D positional audio, allowing for a more immersive audio experience.
  • DirectInput: Enables easy integration with gaming controllers and other input devices.
  • DirectPlay: Allows for easy multiplayer game creation and management, with support for TCP/IP, IPX and modem connections.
  • DirectShow Video Processing: Offers support for hardware accelerated video processing, allowing for faster video encoding and decoding.
  • DirectSound 3D: Enhances the 3D audio experience with hardware acceleration and EAX environmental audio features.
  • DirectX Media Objects: Provides a set of tools for creating streaming audio and video, with support for MPEG-2, MPEG-4 and WMV9 formats.
  • Hardware Acceleration: Offers support for pixel shader and vertex shader 3.0, significantly increasing visual effects and graphics performance.
  • High-Definition Display: Offers support for high-resolution displays, allowing for more detailed and realistic visuals.
  • Multi-Adapter Support: Can work with multiple graphics adapters and cards, allowing for better performance and stability.
  • Multi-Threading: Allows for greater performance by utilizing multiple threads of execution.
  • Pixel Shader: Offers support for pixel shaders, allowing for more realistic lighting, shadows and special effects.
  • Shader Model 2.0: Includes support for Shader Model 2.0, making it easier to create complex shader effects.
  • Texture Compression: Enables the use of compressed textures, reducing the use of system memory.
  • Vertex Shader: Supports vertex shaders, allowing for more complex 3D geometry and animations.
  • Video Acceleration: Enhances video playback, allowing for smoother streaming and faster loading times.

  Compatibility and License

  DirectX 9.0c is provided under a freeware license on Windows, with no restrictions on usage. Download and installation of this PC software is free, and 9.0c was the latest version last time we checked. A quick way to confirm which DirectX runtime a machine reports is sketched below.
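
    On older machines you can read the runtime's version string straight from the registry. Treat the sketch below as a rough check rather than an official API: the `HKLM\SOFTWARE\Microsoft\DirectX\Version` value is a convention of XP/7-era Windows (DirectX 9.0c reports itself as 4.09.00.0904), and it may be missing or misleading on Windows 8 and later, where dxdiag is the better tool.

```python
import winreg  # Windows-only standard library module


def directx_version() -> str:
    """Read the legacy DirectX version string from the registry on older Windows."""
    key_path = r"SOFTWARE\Microsoft\DirectX"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _value_type = winreg.QueryValueEx(key, "Version")
        return value


if __name__ == "__main__":
    try:
        # "4.09.00.0904" corresponds to DirectX 9.0c on XP/7-era systems.
        print("Reported DirectX version:", directx_version())
    except OSError:
        print("Version key not found; run dxdiag instead.")
```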


    64mb Video Card With Directx 9 Compatible Drivers Downloadl


    Download File ✯✯✯ https://ssurll.com/2uzwpi




    DirectX 9 works with a PC's graphics card to enhance graphics and sound when running games, videos and programs containing these elements. The software component is free from Microsoft and required by many programs, especially ones containing graphics, 3D animation and advanced sound elements. If your graphics card is not compatible with DirectX 9, you will not be able to run programs that call for it. The software component contains a built-in diagnostic tool that warns you if your graphics card is not compatible.


    Look under the "Notes" area. If you see "No problems found" or "Certified" listed, your graphics card is compatible with DirectX 9. If you see errors stating that the graphics card was not found or that a conflict has occurred, your graphics card is not compatible with DirectX 9.
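
    If you'd rather not click through the tool by hand, dxdiag can dump its report to a text file with the documented /t switch, and the "Notes" lines can be scanned from there. This is only a small sketch, assuming a Windows machine with dxdiag available; the output file name is arbitrary, and the dump can take several seconds to finish.

```python
import subprocess
import tempfile
from pathlib import Path


def dxdiag_notes():
    """Run dxdiag, save its text report, and return every 'Notes:' line found in it."""
    report = Path(tempfile.gettempdir()) / "dxdiag_report.txt"
    # /t tells dxdiag to write its findings to a plain-text file and exit.
    subprocess.run(["dxdiag", "/t", str(report)], check=True)
    lines = report.read_text(errors="ignore").splitlines()
    return [line.strip() for line in lines if line.strip().startswith("Notes:")]


if __name__ == "__main__":
    for note in dxdiag_notes():
        print(note)  # "No problems found." means that section passed
```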


    1GHz Intel Pentium III or AMD Athlon Processor
    256MB of RAM
    8x DVD-ROM Drive
    64MB Video Card with DirectX 9 compatible drivers ("GeForce3" or better)
    DirectX 9 compatible Stereo Sound Card
    Keyboard & Mouse


    Test System

    Athlon 64 FX-55 @ 3GHz
    2GB Crucial Ballistix
    DFI LanParty UT NF4 SLI-DR
    2x 7800GTX 512 SLI / 2x 6800 Ultra SLI
    Audigy 2 ZS Platinum Pro
    Logitech Z680

    Recommended System Specifications

    64mb DirectX 9 compatible graphics card,
    Pentium IV or Athlon 1.4ghz CPU,
    512mb RAM,
    Windows 2000/XP,
    DirectX 9 compatible sound card.

    An introduction

    It is often the case that the small things in life are the most satisfying. For me, nothing beats a nice warm cup of tea at the end of a hard day. However, developer Infinity Ward obviously doesn't believe in small things, and instead aims for grand scale of epic proportions. The first Call of Duty did this for the most part, with some of the most immense battles and a scale and level of intensity never before witnessed on a PC. This helped set the Call of Duty series above the rest of the crowded World War II genre. Now Infinity Ward have released their sequel: something much bigger, more refined and hopefully much better, making my favoured cup of tea seem more like a small pleasure than a real satisfaction, although there are some similarities, as COD II does have a tendency to make you feel warm and glow inside… once you've dodged the odd flying grenade and the constant barrage of bullets, that is…


    The GeForce FX line moved to PCI Express in early 2004 with a number of models, including the PCX 5300, PCX 5750, PCX 5900, and PCX 5950. These cards were largely the same as their AGP predecessors with similar model numbers. To operate on the PCIe bus, an AGP-to-PCIe "HSI bridge" chip on the video card converted the PCIe signals into AGP signals for the GPU.[18]


    I have a regular MacBook from 2010, not a Pro, just the white basic MacBook, and it has the NVIDIA GeForce 320M as in the screenshot. I know it's not mentioned, but seeing it in the screenshot makes me wonder: is my computer, or more specifically my video card, effective enough to run the game, even on the lowest settings? I'm all for graphics but can do without; I enjoy the Nintendo 64 just as much as I do games that look photo-realistic. Please help, I have looked everywhere to find out but with no luck.


    AGP has long since been abandoned by video card makers, and that means poor driver support. Meanwhile, users here are reporting success with modern PCI cards in their old AGP machines, so that's what I recommend. I have an old Dimension 2350 (no AGP), made about the same time as your GX240, and it is working quite nicely with a Sparkle 8400GS PCI.


    Directx 7.0 Software Free Download --

    In this article, we'll turn our attention to DirectDraw, which we'll use to insert (or blit, in the official graphic developer lingo) a bitmap onto a form, as seen in Figure A. When you pass an empty string ("") DirectX creates a DirectDraw object that uses the active display driver. In addition, DirectX can run multimedia features that the system itself doesn't support. For example, a DirectX application that uses 3-D imagery will still run on a machine that doesn't have a 3-D acceleration card, albeit slowly, because DirectX simulates the services of a 3-D card.

    To create one or more surfaces, instantiate a DirectDrawSurface7 object, and to instantiate a clip area use the DirectDrawClipper object. Right-click on the default form, and select View Code from the shortcut menu. To insert text onto our example surface, add Dim myfont As New StdFont to the general declarations section. Setting the lflags member to DDSDCAPS simply indicates that the following ddscaps is valid in this type. Chances are you don't really want the background image to cover the entire form like this. To accomplish this step, we'll use the DirectDrawSurface7 object's blit method, which takes four arguments. The last argument of the method, DDBLTWAIT, tells DirectDraw to wait if the blitter is busy and blit the surface when it becomes available.

    \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/mlflow.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/mlflow.py deleted file mode 100644 index f9a72592be47b534ce22573775fd5a7e8e86d72d..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/hooks/logger/mlflow.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class MlflowLoggerHook(LoggerHook): - - def __init__(self, - exp_name=None, - tags=None, - log_model=True, - interval=10, - ignore_last=True, - reset_flag=False, - by_epoch=True): - """Class to log metrics and (optionally) a trained model to MLflow. - - It requires `MLflow`_ to be installed. - - Args: - exp_name (str, optional): Name of the experiment to be used. - Default None. - If not None, set the active experiment. - If experiment does not exist, an experiment with provided name - will be created. - tags (dict of str: str, optional): Tags for the current run. - Default None. - If not None, set tags for the current run. - log_model (bool, optional): Whether to log an MLflow artifact. - Default True. - If True, log runner.model as an MLflow artifact - for the current run. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. _MLflow: - https://www.mlflow.org/docs/latest/index.html - """ - super(MlflowLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_mlflow() - self.exp_name = exp_name - self.tags = tags - self.log_model = log_model - - def import_mlflow(self): - try: - import mlflow - import mlflow.pytorch as mlflow_pytorch - except ImportError: - raise ImportError( - 'Please run "pip install mlflow" to install mlflow') - self.mlflow = mlflow - self.mlflow_pytorch = mlflow_pytorch - - @master_only - def before_run(self, runner): - super(MlflowLoggerHook, self).before_run(runner) - if self.exp_name is not None: - self.mlflow.set_experiment(self.exp_name) - if self.tags is not None: - self.mlflow.set_tags(self.tags) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - self.mlflow.log_metrics(tags, step=self.get_iter(runner)) - - @master_only - def after_run(self, runner): - if self.log_model: - self.mlflow_pytorch.log_model(runner.model, 'models') diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/ops/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/ops/__init__.py deleted file mode 100644 index bec51c75b9363a9a19e9fb5c35f4e7dbd6f7751c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/ops/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .encoding import Encoding -from .wrappers import Upsample, resize - -__all__ = ['Upsample', 'resize', 'Encoding'] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/misc.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/misc.py deleted 
file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. 
- - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. 
- - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. 
- """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/cozyanduofen/bingo/src/components/ui/badge.tsx b/spaces/cozyanduofen/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
    - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/curt-tigges/anime-image-labeller/README.md b/spaces/curt-tigges/anime-image-labeller/README.md deleted file mode 100644 index 37e377e5cb0b2e0e8594d0d2b3ae7b82a55f84c6..0000000000000000000000000000000000000000 --- a/spaces/curt-tigges/anime-image-labeller/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Anime Image Labeller -emoji: 🚀 -colorFrom: red -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/daarumadx/bot/src/main.py b/spaces/daarumadx/bot/src/main.py deleted file mode 100644 index 7c8c59116b84e8d3f3e4f2c5de85cb4359610f42..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/main.py +++ /dev/null @@ -1,85 +0,0 @@ -"""main logic.""" -from multiprocessing import freeze_support -freeze_support() - -import os -import sys -import time - -import argv -from config import Config as Conf -from processing import SimpleProcessing -from processing.folder import FolderImageProcessing -from processing.multiple import MultipleImageProcessing - -def main(_): - """Start main logic.""" - Conf.log.info("Welcome to DreamPower") - - if Conf.args['gpu_ids']: - Conf.log.info("GAN Processing Will Use GPU IDs: {}".format(Conf.args['gpu_ids'])) - else: - Conf.log.info("GAN Processing Will Use CPU") - - # Processing - start = time.time() - select_processing().run() - Conf.log.success("Done! We have taken {} seconds".format(round(time.time() - start, 2))) - - # Exit - sys.exit() - - -def select_processing(): - """ - Select the processing to use following args parameters. - - :return: a process to run - """ - if Conf.args['image_size'] and Conf.args['image_size'] >= 256: - Conf.set_image_size(Conf.args['image_size']) - - if os.path.isdir(Conf.args['input']): - process = processing_image_folder() - elif Conf.args['n_runs'] != 1: - process = multiple_image_processing() - else: - process = simple_image_processing() - Conf.log.debug("Process to execute : {}".format(process)) - return process - - -def simple_image_processing(): - """ - Define a simple image process ready to run. - - :param phases: list of image transformation - :return: a image process run ready - """ - return SimpleProcessing() - - -def multiple_image_processing(): - """ - Define a multiple image process ready to run. 
- - :param n_runs: number of times to process - :return: a multiple image process run ready - """ - filename, extension = os.path.splitext(Conf.args['output']) - Conf.args['input'] = [Conf.args['input'] for _ in range(Conf.args['n_runs'])] - Conf.args['output'] = ["{}{}{}".format(filename, i, extension) for i in range(Conf.args['n_runs'])] - return MultipleImageProcessing() - - -def processing_image_folder(): - """ - Define a folder image process ready to run. - - :return: a image process run ready - """ - return FolderImageProcessing() - - -if __name__ == "__main__": - argv.run() diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/data/image_folder.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/data/image_folder.py deleted file mode 100644 index efadc2ecbe2fb4b53b78230aba25ec505eff0e55..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/data/image_folder.py +++ /dev/null @@ -1,66 +0,0 @@ -"""A modified image folder class - -We modify the official PyTorch image folder (https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py) -so that this class can load images from both current directory and its subdirectories. -""" -import numpy as np -import torch.utils.data as data - -from PIL import Image -import os -import os.path - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', - '.tif', '.TIF', '.tiff', '.TIFF', -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(dir, max_dataset_size=float("inf")): - images = [] - assert os.path.isdir(dir) or os.path.islink(dir), '%s is not a valid directory' % dir - - for root, _, fnames in sorted(os.walk(dir, followlinks=True)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - images.append(path) - return images[:min(max_dataset_size, len(images))] - - -def default_loader(path): - return Image.open(path).convert('RGB') - - -class ImageFolder(data.Dataset): - - def __init__(self, root, transform=None, return_paths=False, - loader=default_loader): - imgs = make_dataset(root) - if len(imgs) == 0: - raise(RuntimeError("Found 0 images in: " + root + "\n" - "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) - - self.root = root - self.imgs = imgs - self.transform = transform - self.return_paths = return_paths - self.loader = loader - - def __getitem__(self, index): - path = self.imgs[index] - img = self.loader(path) - if self.transform is not None: - img = self.transform(img) - if self.return_paths: - return img, path - else: - return img - - def __len__(self): - return len(self.imgs) diff --git a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpge.cpp b/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/crazy_functions/test_project/cpp/libJPG/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 
16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. -enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 
0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. -template inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); } - -const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329; -static inline uint8 clamp(int i) { if (static_cast(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast(i); } - -static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels) -{ - for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; } -} - -// Forward DCT - DCT derived from jfdctint. 
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast(var) * static_cast(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
-static void calculate_minimum_redundancy(sym_freq *A, int n) -{ - int root, leaf, next, avbl, used, dpth; - if (n==0) return; else if (n==1) { A[0].m_key = 1; return; } - A[0].m_key += A[1].m_key; root = 0; leaf = 2; - for (next=1; next < n-1; next++) - { - if (leaf>=n || A[root].m_key=n || (root=0; next--) A[next].m_key = A[A[next].m_key].m_key+1; - avbl = 1; used = dpth = 0; root = n-2; next = n-1; - while (avbl>0) - { - while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; } - while (avbl>used) { A[next--].m_key = dpth; avbl--; } - avbl = 2*used; dpth++; used = 0; - } -} - -// Limits canonical Huffman code table's max code size to max_code_size. -static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size) -{ - if (code_list_len <= 1) return; - - for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i]; - - uint32 total = 0; - for (int i = max_code_size; i > 0; i--) - total += (((uint32)pNum_codes[i]) << (max_code_size - i)); - - while (total != (1UL << max_code_size)) - { - pNum_codes[max_code_size]--; - for (int i = max_code_size - 1; i > 0; i--) - { - if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; } - } - total--; - } -} - -// Generates an optimized offman table. -void jpeg_encoder::optimize_huffman_table(int table_num, int table_len) -{ - sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS]; - syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's - int num_used_syms = 1; - const uint32 *pSym_count = &m_huff_count[table_num][0]; - for (int i = 0; i < table_len; i++) - if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; } - sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1); - calculate_minimum_redundancy(pSyms, num_used_syms); - - // Count the # of symbols of each code size. - int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes); - for (int i = 0; i < num_used_syms; i++) - num_codes[pSyms[i].m_key]++; - - const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol) - huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT); - - // Compute m_huff_bits array, which contains the # of symbols per code size. - clear_obj(m_huff_bits[table_num]); - for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++) - m_huff_bits[table_num][i] = static_cast(num_codes[i]); - - // Remove the dummy symbol added above, which must be in largest bucket. - for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--) - { - if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; } - } - - // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest). - for (int i = num_used_syms - 1; i >= 1; i--) - m_huff_val[table_num][num_used_syms - 1 - i] = static_cast(pSyms[i].m_sym_index - 1); -} - -// JPEG marker generation. 
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/davda54/chat-nort5/norquad/modeling_norbert.py b/spaces/davda54/chat-nort5/norquad/modeling_norbert.py deleted file mode 100644 index c802871db6175aa9560eb1c72a732c0297e5751f..0000000000000000000000000000000000000000 --- a/spaces/davda54/chat-nort5/norquad/modeling_norbert.py +++ /dev/null @@ -1,657 +0,0 @@ -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils import checkpoint - -from configuration_norbert import NorbertConfig -from transformers.modeling_utils import PreTrainedModel -from transformers.activations import gelu_new -from transformers.modeling_outputs import ( - MaskedLMOutput, - MultipleChoiceModelOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, - BaseModelOutput -) -from transformers.pytorch_utils import softmax_backward_data - - -class Encoder(nn.Module): - def __init__(self, config, activation_checkpointing=False): - super().__init__() - self.layers = nn.ModuleList([EncoderLayer(config) for _ in range(config.num_hidden_layers)]) - - for i, layer in enumerate(self.layers): - layer.mlp.mlp[1].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i))) - layer.mlp.mlp[-2].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i))) - - self.activation_checkpointing = activation_checkpointing - - def forward(self, hidden_states, attention_mask, relative_embedding): - hidden_states, attention_probs = [hidden_states], [] - - for layer in self.layers: - if self.activation_checkpointing: - hidden_state, attention_p = checkpoint.checkpoint(layer, hidden_states[-1], attention_mask, relative_embedding) - else: - hidden_state, attention_p = layer(hidden_states[-1], attention_mask, relative_embedding) - - hidden_states.append(hidden_state) - attention_probs.append(attention_p) - - return hidden_states, attention_probs - - -class MaskClassifier(nn.Module): - def __init__(self, config, subword_embedding): - super().__init__() - self.nonlinearity = nn.Sequential( - nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False), - nn.Linear(config.hidden_size, config.hidden_size), - nn.GELU(), - nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False), - nn.Dropout(config.hidden_dropout_prob), - nn.Linear(subword_embedding.size(1), subword_embedding.size(0)) - ) - self.initialize(config.hidden_size, subword_embedding) - - def initialize(self, hidden_size, embedding): - std = math.sqrt(2.0 / (5.0 * hidden_size)) - nn.init.trunc_normal_(self.nonlinearity[1].weight, mean=0.0, std=std, a=-2*std, b=2*std) - self.nonlinearity[-1].weight = embedding - self.nonlinearity[1].bias.data.zero_() - self.nonlinearity[-1].bias.data.zero_() - - def forward(self, x, masked_lm_labels=None): - if masked_lm_labels is not None: - x = torch.index_select(x.flatten(0, 1), 0, torch.nonzero(masked_lm_labels.flatten() != -100).squeeze()) - x = self.nonlinearity(x) - return x - - -class EncoderLayer(nn.Module): - def __init__(self, config): - 
super().__init__() - self.attention = Attention(config) - self.mlp = FeedForward(config) - - def forward(self, x, padding_mask, relative_embedding): - attention_output, attention_probs = self.attention(x, padding_mask, relative_embedding) - x = x + attention_output - x = x + self.mlp(x) - return x, attention_probs - - -class GeGLU(nn.Module): - def forward(self, x): - x, gate = x.chunk(2, dim=-1) - x = x * gelu_new(gate) - return x - - -class FeedForward(nn.Module): - def __init__(self, config): - super().__init__() - self.mlp = nn.Sequential( - nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False), - nn.Linear(config.hidden_size, 2*config.intermediate_size, bias=False), - GeGLU(), - nn.LayerNorm(config.intermediate_size, eps=config.layer_norm_eps, elementwise_affine=False), - nn.Linear(config.intermediate_size, config.hidden_size, bias=False), - nn.Dropout(config.hidden_dropout_prob) - ) - self.initialize(config.hidden_size) - - def initialize(self, hidden_size): - std = math.sqrt(2.0 / (5.0 * hidden_size)) - nn.init.trunc_normal_(self.mlp[1].weight, mean=0.0, std=std, a=-2*std, b=2*std) - nn.init.trunc_normal_(self.mlp[-2].weight, mean=0.0, std=std, a=-2*std, b=2*std) - - def forward(self, x): - return self.mlp(x) - - -class MaskedSoftmax(torch.autograd.Function): - @staticmethod - def forward(self, x, mask, dim): - self.dim = dim - x.masked_fill_(mask, float('-inf')) - x = torch.softmax(x, self.dim) - x.masked_fill_(mask, 0.0) - self.save_for_backward(x) - return x - - @staticmethod - def backward(self, grad_output): - output, = self.saved_tensors - input_grad = softmax_backward_data(self, grad_output, output, self.dim, output) - return input_grad, None, None - - -class Attention(nn.Module): - def __init__(self, config): - super().__init__() - - self.config = config - - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError(f"The hidden size {config.hidden_size} is not a multiple of the number of attention heads {config.num_attention_heads}") - - self.hidden_size = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_size = config.hidden_size // config.num_attention_heads - - self.in_proj_qk = nn.Linear(config.hidden_size, 2*config.hidden_size, bias=True) - self.in_proj_v = nn.Linear(config.hidden_size, config.hidden_size, bias=True) - self.out_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=True) - - self.pre_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False) - self.post_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=True) - - position_indices = torch.arange(config.max_position_embeddings, dtype=torch.long).unsqueeze(1) \ - - torch.arange(config.max_position_embeddings, dtype=torch.long).unsqueeze(0) - position_indices = self.make_log_bucket_position(position_indices, config.position_bucket_size, config.max_position_embeddings) - position_indices = config.position_bucket_size - 1 + position_indices - self.register_buffer("position_indices", position_indices, persistent=True) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.scale = 1.0 / math.sqrt(3 * self.head_size) - self.initialize() - - def make_log_bucket_position(self, relative_pos, bucket_size, max_position): - sign = torch.sign(relative_pos) - mid = bucket_size // 2 - abs_pos = torch.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, torch.abs(relative_pos).clamp(max=max_position - 1)) - log_pos = 
torch.ceil(torch.log(abs_pos / mid) / math.log((max_position-1) / mid) * (mid - 1)).int() + mid - bucket_pos = torch.where(abs_pos <= mid, relative_pos, log_pos * sign).long() - return bucket_pos - - def initialize(self): - std = math.sqrt(2.0 / (5.0 * self.hidden_size)) - nn.init.trunc_normal_(self.in_proj_qk.weight, mean=0.0, std=std, a=-2*std, b=2*std) - nn.init.trunc_normal_(self.in_proj_v.weight, mean=0.0, std=std, a=-2*std, b=2*std) - nn.init.trunc_normal_(self.out_proj.weight, mean=0.0, std=std, a=-2*std, b=2*std) - self.in_proj_qk.bias.data.zero_() - self.in_proj_v.bias.data.zero_() - self.out_proj.bias.data.zero_() - - def compute_attention_scores(self, hidden_states, relative_embedding): - key_len, batch_size, _ = hidden_states.size() - query_len = key_len - - if self.position_indices.size(0) < query_len: - position_indices = torch.arange(query_len, dtype=torch.long).unsqueeze(1) \ - - torch.arange(query_len, dtype=torch.long).unsqueeze(0) - position_indices = self.make_log_bucket_position(position_indices, self.position_bucket_size, 512) - position_indices = self.position_bucket_size - 1 + position_indices - self.position_indices = position_indices.to(hidden_states.device) - - hidden_states = self.pre_layer_norm(hidden_states) - - query, key = self.in_proj_qk(hidden_states).chunk(2, dim=2) # shape: [T, B, D] - value = self.in_proj_v(hidden_states) # shape: [T, B, D] - - query = query.reshape(query_len, batch_size * self.num_heads, self.head_size).transpose(0, 1) - key = key.reshape(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1) - value = value.view(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1) - - attention_scores = torch.bmm(query, key.transpose(1, 2) * self.scale) - - pos = self.in_proj_qk(self.dropout(relative_embedding)) # shape: [2T-1, 2D] - query_pos, key_pos = pos.view(-1, self.num_heads, 2*self.head_size).chunk(2, dim=2) - query = query.view(batch_size, self.num_heads, query_len, self.head_size) - key = key.view(batch_size, self.num_heads, query_len, self.head_size) - - attention_c_p = torch.einsum("bhqd,khd->bhqk", query, key_pos.squeeze(1) * self.scale) - attention_p_c = torch.einsum("bhkd,qhd->bhqk", key * self.scale, query_pos.squeeze(1)) - - position_indices = self.position_indices[:query_len, :key_len].expand(batch_size, self.num_heads, -1, -1) - attention_c_p = attention_c_p.gather(3, position_indices) - attention_p_c = attention_p_c.gather(2, position_indices) - - attention_scores = attention_scores.view(batch_size, self.num_heads, query_len, key_len) - attention_scores.add_(attention_c_p) - attention_scores.add_(attention_p_c) - - return attention_scores, value - - def compute_output(self, attention_probs, value): - attention_probs = self.dropout(attention_probs) - context = torch.bmm(attention_probs.flatten(0, 1), value) # shape: [B*H, Q, D] - context = context.transpose(0, 1).reshape(context.size(1), -1, self.hidden_size) # shape: [Q, B, H*D] - context = self.out_proj(context) - context = self.post_layer_norm(context) - context = self.dropout(context) - return context - - def forward(self, hidden_states, attention_mask, relative_embedding): - attention_scores, value = self.compute_attention_scores(hidden_states, relative_embedding) - attention_probs = MaskedSoftmax.apply(attention_scores, attention_mask, -1) - return self.compute_output(attention_probs, value), attention_probs.detach() - - -class Embedding(nn.Module): - def __init__(self, config): - super().__init__() - self.hidden_size = config.hidden_size - - 
self.word_embedding = nn.Embedding(config.vocab_size, config.hidden_size) - self.word_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - self.relative_embedding = nn.Parameter(torch.empty(2 * config.position_bucket_size - 1, config.hidden_size)) - self.relative_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - self.initialize() - - def initialize(self): - std = math.sqrt(2.0 / (5.0 * self.hidden_size)) - nn.init.trunc_normal_(self.relative_embedding, mean=0.0, std=std, a=-2*std, b=2*std) - nn.init.trunc_normal_(self.word_embedding.weight, mean=0.0, std=std, a=-2*std, b=2*std) - - def forward(self, input_ids): - word_embedding = self.dropout(self.word_layer_norm(self.word_embedding(input_ids))) - relative_embeddings = self.relative_layer_norm(self.relative_embedding) - return word_embedding, relative_embeddings - - -# -# HuggingFace wrappers -# - -class NorbertPreTrainedModel(PreTrainedModel): - config_class = NorbertConfig - base_model_prefix = "norbert3" - supports_gradient_checkpointing = True - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, Encoder): - module.activation_checkpointing = value - - def _init_weights(self, module): - pass # everything is already initialized - - -class NorbertModel(NorbertPreTrainedModel): - def __init__(self, config, add_mlm_layer=False): - super().__init__(config) - self.config = config - - self.embedding = Embedding(config) - self.transformer = Encoder(config, activation_checkpointing=False) - self.classifier = MaskClassifier(config, self.embedding.word_embedding.weight) if add_mlm_layer else None - - def get_input_embeddings(self): - return self.embedding.word_embedding - - def set_input_embeddings(self, value): - self.embedding.word_embedding = value - - def get_contextualized_embeddings( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None - ) -> List[torch.Tensor]: - if input_ids is not None: - input_shape = input_ids.size() - else: - raise ValueError("You have to specify input_ids") - - batch_size, seq_length = input_shape - device = input_ids.device - - if attention_mask is None: - attention_mask = torch.zeros(batch_size, seq_length, dtype=torch.bool, device=device) - else: - attention_mask = ~attention_mask.bool() - attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) - - static_embeddings, relative_embedding = self.embedding(input_ids.t()) - contextualized_embeddings, attention_probs = self.transformer(static_embeddings, attention_mask, relative_embedding) - contextualized_embeddings = [e.transpose(0, 1) for e in contextualized_embeddings] - last_layer = contextualized_embeddings[-1] - contextualized_embeddings = [contextualized_embeddings[0]] + [ - contextualized_embeddings[i] - contextualized_embeddings[i - 1] - for i in range(1, len(contextualized_embeddings)) - ] - return last_layer, contextualized_embeddings, attention_probs - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BaseModelOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - sequence_output, 
contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(input_ids, attention_mask) - - if not return_dict: - return ( - sequence_output, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - - return BaseModelOutput( - last_hidden_state=sequence_output, - hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) - - -class NorbertForMaskedLM(NorbertModel): - _keys_to_ignore_on_load_unexpected = ["head"] - - def __init__(self, config): - super().__init__(config, add_mlm_layer=True) - - def get_output_embeddings(self): - return self.classifier.nonlinearity[-1].weight - - def set_output_embeddings(self, new_embeddings): - self.classifier.nonlinearity[-1].weight = new_embeddings - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - output_attentions: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - ) -> Union[Tuple[torch.Tensor], MaskedLMOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - sequence_output, contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(input_ids, attention_mask) - subword_prediction = self.classifier(sequence_output) - subword_prediction[:, :, :106+1] = float("-inf") - - masked_lm_loss = None - if labels is not None: - masked_lm_loss = F.cross_entropy(subword_prediction.flatten(0, 1), labels.flatten()) - - if not return_dict: - output = ( - subword_prediction, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=subword_prediction, - hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) - - -class Classifier(nn.Module): - def __init__(self, config, num_labels: int): - super().__init__() - - drop_out = getattr(config, "cls_dropout", None) - drop_out = config.hidden_dropout_prob if drop_out is None else drop_out - - self.nonlinearity = nn.Sequential( - nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False), - nn.Linear(config.hidden_size, config.hidden_size), - nn.GELU(), - nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False), - nn.Dropout(drop_out), - nn.Linear(config.hidden_size, num_labels) - ) - self.initialize(config.hidden_size) - - def initialize(self, hidden_size): - std = math.sqrt(2.0 / (5.0 * hidden_size)) - nn.init.trunc_normal_(self.nonlinearity[1].weight, mean=0.0, std=std, a=-2*std, b=2*std) - nn.init.trunc_normal_(self.nonlinearity[-1].weight, mean=0.0, std=std, a=-2*std, b=2*std) - self.nonlinearity[1].bias.data.zero_() - self.nonlinearity[-1].bias.data.zero_() - - def forward(self, x): - x = self.nonlinearity(x) - return x - - -class NorbertForSequenceClassification(NorbertModel): - _keys_to_ignore_on_load_unexpected = ["classifier"] - _keys_to_ignore_on_load_missing = ["head"] - - def __init__(self, config): - super().__init__(config, add_mlm_layer=False) - - self.num_labels = config.num_labels - 
self.head = Classifier(config, self.num_labels) - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - sequence_output, contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(input_ids, attention_mask) - logits = self.head(sequence_output[:, 0, :]) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = nn.MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = nn.CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = nn.BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = ( - logits, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) - - -class NorbertForTokenClassification(NorbertModel): - _keys_to_ignore_on_load_unexpected = ["classifier"] - _keys_to_ignore_on_load_missing = ["head"] - - def __init__(self, config): - super().__init__(config, add_mlm_layer=False) - - self.num_labels = config.num_labels - self.head = Classifier(config, self.num_labels) - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - sequence_output, contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(input_ids, attention_mask) - logits = self.head(sequence_output) - - loss = None - if labels is not None: - loss_fct = nn.CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = ( - logits, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - 
hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) - - -class NorbertForQuestionAnswering(NorbertModel): - _keys_to_ignore_on_load_unexpected = ["classifier"] - _keys_to_ignore_on_load_missing = ["head"] - - def __init__(self, config): - super().__init__(config, add_mlm_layer=False) - - self.num_labels = config.num_labels - self.head = Classifier(config, self.num_labels) - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - start_positions: Optional[torch.Tensor] = None, - end_positions: Optional[torch.Tensor] = None - ) -> Union[Tuple[torch.Tensor], QuestionAnsweringModelOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - sequence_output, contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(input_ids, attention_mask) - logits = self.head(sequence_output) - - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = nn.CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = ( - start_logits, - end_logits, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) - - -class NorbertForMultipleChoice(NorbertModel): - _keys_to_ignore_on_load_unexpected = ["classifier"] - _keys_to_ignore_on_load_missing = ["head"] - - def __init__(self, config): - super().__init__(config, add_mlm_layer=False) - - self.num_labels = getattr(config, "num_labels", 2) - self.head = Classifier(config, self.num_labels) - - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None - ) -> Union[Tuple[torch.Tensor], MultipleChoiceModelOutput]: - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = 
input_ids.shape[1] - - flat_input_ids = input_ids.view(-1, input_ids.size(-1)) - flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - - sequence_output, contextualized_embeddings, attention_probs = self.get_contextualized_embeddings(flat_input_ids, flat_attention_mask) - logits = self.head(sequence_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = nn.CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = ( - reshaped_logits, - *([contextualized_embeddings] if output_hidden_states else []), - *([attention_probs] if output_attentions else []) - ) - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=contextualized_embeddings if output_hidden_states else None, - attentions=attention_probs if output_attentions else None - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/blockquote.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/blockquote.py deleted file mode 100644 index 0c9081b9cbd4b49d39d75427fd806e56c485a5fd..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/blockquote.py +++ /dev/null @@ -1,299 +0,0 @@ -# Block quotes -from __future__ import annotations - -import logging - -from ..common.utils import isStrSpace -from .state_block import StateBlock - -LOGGER = logging.getLogger(__name__) - - -def blockquote(state: StateBlock, startLine: int, endLine: int, silent: bool) -> bool: - LOGGER.debug( - "entering blockquote: %s, %s, %s, %s", state, startLine, endLine, silent - ) - - oldLineMax = state.lineMax - pos = state.bMarks[startLine] + state.tShift[startLine] - max = state.eMarks[startLine] - - if state.is_code_block(startLine): - return False - - # check the block quote marker - try: - if state.src[pos] != ">": - return False - except IndexError: - return False - pos += 1 - - # we know that it's going to be a valid blockquote, - # so no point trying to find the end of it in silent mode - if silent: - return True - - # set offset past spaces and ">" - initial = offset = state.sCount[startLine] + 1 - - try: - second_char: str | None = state.src[pos] - except IndexError: - second_char = None - - # skip one optional space after '>' - if second_char == " ": - # ' > test ' - # ^ -- position start of line here: - pos += 1 - initial += 1 - offset += 1 - adjustTab = False - spaceAfterMarker = True - elif second_char == "\t": - spaceAfterMarker = True - - if (state.bsCount[startLine] + offset) % 4 == 3: - # ' >\t test ' - # ^ -- position start of line here (tab has width==1) - pos += 1 - initial += 1 - offset += 1 - adjustTab = False - else: - # ' >\t test ' - # ^ -- position start of line here + shift bsCount slightly - # to make extra space appear - adjustTab = True - - else: - spaceAfterMarker = False - - oldBMarks = [state.bMarks[startLine]] - state.bMarks[startLine] = pos - - while pos < max: - ch = state.src[pos] - - if isStrSpace(ch): - if ch == "\t": - offset += ( - 4 - - (offset + state.bsCount[startLine] + (1 if adjustTab else 0)) % 4 - ) - else: - offset += 1 - - else: - break - - pos += 1 - - oldBSCount = [state.bsCount[startLine]] - state.bsCount[startLine] = ( - state.sCount[startLine] + 1 + (1 if 
spaceAfterMarker else 0) - ) - - lastLineEmpty = pos >= max - - oldSCount = [state.sCount[startLine]] - state.sCount[startLine] = offset - initial - - oldTShift = [state.tShift[startLine]] - state.tShift[startLine] = pos - state.bMarks[startLine] - - terminatorRules = state.md.block.ruler.getRules("blockquote") - - oldParentType = state.parentType - state.parentType = "blockquote" - - # Search the end of the block - # - # Block ends with either: - # 1. an empty line outside: - # ``` - # > test - # - # ``` - # 2. an empty line inside: - # ``` - # > - # test - # ``` - # 3. another tag: - # ``` - # > test - # - - - - # ``` - - # for (nextLine = startLine + 1; nextLine < endLine; nextLine++) { - nextLine = startLine + 1 - while nextLine < endLine: - # check if it's outdented, i.e. it's inside list item and indented - # less than said list item: - # - # ``` - # 1. anything - # > current blockquote - # 2. checking this line - # ``` - isOutdented = state.sCount[nextLine] < state.blkIndent - - pos = state.bMarks[nextLine] + state.tShift[nextLine] - max = state.eMarks[nextLine] - - if pos >= max: - # Case 1: line is not inside the blockquote, and this line is empty. - break - - evaluatesTrue = state.src[pos] == ">" and not isOutdented - pos += 1 - if evaluatesTrue: - # This line is inside the blockquote. - - # set offset past spaces and ">" - initial = offset = state.sCount[nextLine] + 1 - - try: - next_char: str | None = state.src[pos] - except IndexError: - next_char = None - - # skip one optional space after '>' - if next_char == " ": - # ' > test ' - # ^ -- position start of line here: - pos += 1 - initial += 1 - offset += 1 - adjustTab = False - spaceAfterMarker = True - elif next_char == "\t": - spaceAfterMarker = True - - if (state.bsCount[nextLine] + offset) % 4 == 3: - # ' >\t test ' - # ^ -- position start of line here (tab has width==1) - pos += 1 - initial += 1 - offset += 1 - adjustTab = False - else: - # ' >\t test ' - # ^ -- position start of line here + shift bsCount slightly - # to make extra space appear - adjustTab = True - - else: - spaceAfterMarker = False - - oldBMarks.append(state.bMarks[nextLine]) - state.bMarks[nextLine] = pos - - while pos < max: - ch = state.src[pos] - - if isStrSpace(ch): - if ch == "\t": - offset += ( - 4 - - ( - offset - + state.bsCount[nextLine] - + (1 if adjustTab else 0) - ) - % 4 - ) - else: - offset += 1 - else: - break - - pos += 1 - - lastLineEmpty = pos >= max - - oldBSCount.append(state.bsCount[nextLine]) - state.bsCount[nextLine] = ( - state.sCount[nextLine] + 1 + (1 if spaceAfterMarker else 0) - ) - - oldSCount.append(state.sCount[nextLine]) - state.sCount[nextLine] = offset - initial - - oldTShift.append(state.tShift[nextLine]) - state.tShift[nextLine] = pos - state.bMarks[nextLine] - - nextLine += 1 - continue - - # Case 2: line is not inside the blockquote, and the last line was empty. - if lastLineEmpty: - break - - # Case 3: another tag found. 
- terminate = False - - for terminatorRule in terminatorRules: - if terminatorRule(state, nextLine, endLine, True): - terminate = True - break - - if terminate: - # Quirk to enforce "hard termination mode" for paragraphs; - # normally if you call `tokenize(state, startLine, nextLine)`, - # paragraphs will look below nextLine for paragraph continuation, - # but if blockquote is terminated by another tag, they shouldn't - state.lineMax = nextLine - - if state.blkIndent != 0: - # state.blkIndent was non-zero, we now set it to zero, - # so we need to re-calculate all offsets to appear as - # if indent wasn't changed - oldBMarks.append(state.bMarks[nextLine]) - oldBSCount.append(state.bsCount[nextLine]) - oldTShift.append(state.tShift[nextLine]) - oldSCount.append(state.sCount[nextLine]) - state.sCount[nextLine] -= state.blkIndent - - break - - oldBMarks.append(state.bMarks[nextLine]) - oldBSCount.append(state.bsCount[nextLine]) - oldTShift.append(state.tShift[nextLine]) - oldSCount.append(state.sCount[nextLine]) - - # A negative indentation means that this is a paragraph continuation - # - state.sCount[nextLine] = -1 - - nextLine += 1 - - oldIndent = state.blkIndent - state.blkIndent = 0 - - token = state.push("blockquote_open", "blockquote", 1) - token.markup = ">" - token.map = lines = [startLine, 0] - - state.md.block.tokenize(state, startLine, nextLine) - - token = state.push("blockquote_close", "blockquote", -1) - token.markup = ">" - - state.lineMax = oldLineMax - state.parentType = oldParentType - lines[1] = state.line - - # Restore original tShift; this might not be necessary since the parser - # has already been here, but just to make sure we can do that. - for i, item in enumerate(oldTShift): - state.bMarks[i + startLine] = oldBMarks[i] - state.tShift[i + startLine] = item - state.sCount[i + startLine] = oldSCount[i] - state.bsCount[i + startLine] = oldBSCount[i] - - state.blkIndent = oldIndent - - return True diff --git a/spaces/dcq/freegpt-webui/client/css/stop-generating.css b/spaces/dcq/freegpt-webui/client/css/stop-generating.css deleted file mode 100644 index 3c2010d25065fbef63b104df743ef72c00259871..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/css/stop-generating.css +++ /dev/null @@ -1,38 +0,0 @@ -.stop-generating { - position: absolute; - bottom: 128px; - left: 50%; - transform: translateX(-50%); - z-index: 1000000; -} - -.stop-generating button { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - color: var(--colour-3); - cursor: pointer; - animation: show_popup 0.4s; -} - -@keyframes show_popup { - from { - opacity: 0; - transform: translateY(10px); - } -} - -@keyframes hide_popup { - to { - opacity: 0; - transform: translateY(10px); - } -} - -.stop-generating-hiding button { - animation: hide_popup 0.4s; -} - -.stop-generating-hidden button { - display: none; -} diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/pipeline_ddim.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/pipeline_ddim.py deleted file mode 100644 index 0e7f2258fa999cc4cdd999a63c287f38eb7ac9a6..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/pipeline_ddim.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import List, Optional, Tuple, Union - -import torch - -from ...schedulers import DDIMScheduler -from ...utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -class DDIMPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of - [`DDPMScheduler`], or [`DDIMScheduler`]. - """ - - def __init__(self, unet, scheduler): - super().__init__() - - # make sure scheduler can always be converted to DDIM - scheduler = DDIMScheduler.from_config(scheduler.config) - - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - eta: float = 0.0, - num_inference_steps: int = 50, - use_clipped_model_output: Optional[bool] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - ) -> Union[ImagePipelineOutput, Tuple]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - The number of images to generate. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - eta (`float`, *optional*, defaults to 0.0): - The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - use_clipped_model_output (`bool`, *optional*, defaults to `None`): - if `True` or `False`, see documentation for `DDIMScheduler.step`. If `None`, nothing is passed - downstream to the scheduler. So use `None` for schedulers which don't support this argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is - True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
- """ - - # Sample gaussian noise to begin loop - if isinstance(self.unet.sample_size, int): - image_shape = (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size) - else: - image_shape = (batch_size, self.unet.in_channels, *self.unet.sample_size) - - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - image = randn_tensor(image_shape, generator=generator, device=self.device, dtype=self.unet.dtype) - - # set step values - self.scheduler.set_timesteps(num_inference_steps) - - for t in self.progress_bar(self.scheduler.timesteps): - # 1. predict noise model_output - model_output = self.unet(image, t).sample - - # 2. predict previous mean of image x_t-1 and add variance depending on eta - # eta corresponds to η in paper and should be between [0, 1] - # do x_t -> x_t-1 - image = self.scheduler.step( - model_output, t, image, eta=eta, use_clipped_model_output=use_clipped_model_output, generator=generator - ).prev_sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_vq_diffusion.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_vq_diffusion.py deleted file mode 100644 index b92722e4d462ca675bbf11230c1c39810de48b6e..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_vq_diffusion.py +++ /dev/null @@ -1,496 +0,0 @@ -# Copyright 2023 Microsoft and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .scheduling_utils import SchedulerMixin - - -@dataclass -class VQDiffusionSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - Computed sample x_{t-1} of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. 
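For context, a pipeline like the `DDIMPipeline` above is normally constructed through `from_pretrained` on the `DiffusionPipeline` base class and called with the arguments documented in `__call__`. A minimal usage sketch; the checkpoint id is only an illustrative example:

```python
# Minimal usage sketch for a DDIM pipeline as defined above.
# Assumes diffusers and torch are installed; the checkpoint id is an example only.
import torch
from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# eta=0.0 corresponds to deterministic DDIM updates (no extra noise per step);
# fewer inference steps trade image quality for speed.
result = pipe(batch_size=1, num_inference_steps=50, eta=0.0)
result.images[0].save("ddim_sample.png")
```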
- """ - - prev_sample: torch.LongTensor - - -def index_to_log_onehot(x: torch.LongTensor, num_classes: int) -> torch.FloatTensor: - """ - Convert batch of vector of class indices into batch of log onehot vectors - - Args: - x (`torch.LongTensor` of shape `(batch size, vector length)`): - Batch of class indices - - num_classes (`int`): - number of classes to be used for the onehot vectors - - Returns: - `torch.FloatTensor` of shape `(batch size, num classes, vector length)`: - Log onehot vectors - """ - x_onehot = F.one_hot(x, num_classes) - x_onehot = x_onehot.permute(0, 2, 1) - log_x = torch.log(x_onehot.float().clamp(min=1e-30)) - return log_x - - -def gumbel_noised(logits: torch.FloatTensor, generator: Optional[torch.Generator]) -> torch.FloatTensor: - """ - Apply gumbel noise to `logits` - """ - uniform = torch.rand(logits.shape, device=logits.device, generator=generator) - gumbel_noise = -torch.log(-torch.log(uniform + 1e-30) + 1e-30) - noised = gumbel_noise + logits - return noised - - -def alpha_schedules(num_diffusion_timesteps: int, alpha_cum_start=0.99999, alpha_cum_end=0.000009): - """ - Cumulative and non-cumulative alpha schedules. - - See section 4.1. - """ - att = ( - np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (alpha_cum_end - alpha_cum_start) - + alpha_cum_start - ) - att = np.concatenate(([1], att)) - at = att[1:] / att[:-1] - att = np.concatenate((att[1:], [1])) - return at, att - - -def gamma_schedules(num_diffusion_timesteps: int, gamma_cum_start=0.000009, gamma_cum_end=0.99999): - """ - Cumulative and non-cumulative gamma schedules. - - See section 4.1. - """ - ctt = ( - np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (gamma_cum_end - gamma_cum_start) - + gamma_cum_start - ) - ctt = np.concatenate(([0], ctt)) - one_minus_ctt = 1 - ctt - one_minus_ct = one_minus_ctt[1:] / one_minus_ctt[:-1] - ct = 1 - one_minus_ct - ctt = np.concatenate((ctt[1:], [0])) - return ct, ctt - - -class VQDiffusionScheduler(SchedulerMixin, ConfigMixin): - """ - The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image. - - The VQ-diffusion scheduler converts the transformer's output into a sample for the unnoised image at the previous - diffusion timestep. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2111.14822 - - Args: - num_vec_classes (`int`): - The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked - latent pixel. - - num_train_timesteps (`int`): - Number of diffusion steps used to train the model. - - alpha_cum_start (`float`): - The starting cumulative alpha value. - - alpha_cum_end (`float`): - The ending cumulative alpha value. - - gamma_cum_start (`float`): - The starting cumulative gamma value. - - gamma_cum_end (`float`): - The ending cumulative gamma value. 
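The `alpha_schedules` helper above returns both the per-step and the cumulative alphas. A small standalone check, re-stating the function from the source above, confirms that the per-step values multiply back to the shifted cumulative schedule:

```python
# Standalone sketch: verify that the per-step alphas returned by alpha_schedules
# (re-stated from the scheduler source above) cumulatively multiply back to the
# shifted cumulative schedule, i.e. np.cumprod(at) == att[:-1].
import numpy as np

def alpha_schedules(T, alpha_cum_start=0.99999, alpha_cum_end=0.000009):
    att = np.arange(0, T) / (T - 1) * (alpha_cum_end - alpha_cum_start) + alpha_cum_start
    att = np.concatenate(([1], att))
    at = att[1:] / att[:-1]
    att = np.concatenate((att[1:], [1]))
    return at, att

at, att = alpha_schedules(100)
assert np.allclose(np.cumprod(at), att[:-1])
print(at.shape, att.shape)  # (100,) per-step, (101,) cumulative
```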
- """ - - order = 1 - - @register_to_config - def __init__( - self, - num_vec_classes: int, - num_train_timesteps: int = 100, - alpha_cum_start: float = 0.99999, - alpha_cum_end: float = 0.000009, - gamma_cum_start: float = 0.000009, - gamma_cum_end: float = 0.99999, - ): - self.num_embed = num_vec_classes - - # By convention, the index for the mask class is the last class index - self.mask_class = self.num_embed - 1 - - at, att = alpha_schedules(num_train_timesteps, alpha_cum_start=alpha_cum_start, alpha_cum_end=alpha_cum_end) - ct, ctt = gamma_schedules(num_train_timesteps, gamma_cum_start=gamma_cum_start, gamma_cum_end=gamma_cum_end) - - num_non_mask_classes = self.num_embed - 1 - bt = (1 - at - ct) / num_non_mask_classes - btt = (1 - att - ctt) / num_non_mask_classes - - at = torch.tensor(at.astype("float64")) - bt = torch.tensor(bt.astype("float64")) - ct = torch.tensor(ct.astype("float64")) - log_at = torch.log(at) - log_bt = torch.log(bt) - log_ct = torch.log(ct) - - att = torch.tensor(att.astype("float64")) - btt = torch.tensor(btt.astype("float64")) - ctt = torch.tensor(ctt.astype("float64")) - log_cumprod_at = torch.log(att) - log_cumprod_bt = torch.log(btt) - log_cumprod_ct = torch.log(ctt) - - self.log_at = log_at.float() - self.log_bt = log_bt.float() - self.log_ct = log_ct.float() - self.log_cumprod_at = log_cumprod_at.float() - self.log_cumprod_bt = log_cumprod_bt.float() - self.log_cumprod_ct = log_cumprod_ct.float() - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy()) - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - - device (`str` or `torch.device`): - device to place the timesteps and the diffusion process parameters (alpha, beta, gamma) on. - """ - self.num_inference_steps = num_inference_steps - timesteps = np.arange(0, self.num_inference_steps)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps).to(device) - - self.log_at = self.log_at.to(device) - self.log_bt = self.log_bt.to(device) - self.log_ct = self.log_ct.to(device) - self.log_cumprod_at = self.log_cumprod_at.to(device) - self.log_cumprod_bt = self.log_cumprod_bt.to(device) - self.log_cumprod_ct = self.log_cumprod_ct.to(device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: torch.long, - sample: torch.LongTensor, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[VQDiffusionSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the - docstring for `self.q_posterior` for more in depth docs on how Equation (11) is computed. - - Args: - log_p_x_0: (`torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`): - The log probabilities for the predicted classes of the initial latent pixels. Does not include a - prediction for the masked class as the initial unnoised image cannot be masked. - - t (`torch.long`): - The timestep that determines which transition matrices are used. 
- - x_t: (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t` - - generator: (`torch.Generator` or None): - RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from. - - return_dict (`bool`): - option for returning tuple rather than VQDiffusionSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is the sample tensor. - """ - if timestep == 0: - log_p_x_t_min_1 = model_output - else: - log_p_x_t_min_1 = self.q_posterior(model_output, sample, timestep) - - log_p_x_t_min_1 = gumbel_noised(log_p_x_t_min_1, generator) - - x_t_min_1 = log_p_x_t_min_1.argmax(dim=1) - - if not return_dict: - return (x_t_min_1,) - - return VQDiffusionSchedulerOutput(prev_sample=x_t_min_1) - - def q_posterior(self, log_p_x_0, x_t, t): - """ - Calculates the log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11). - - Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only - forward probabilities. - - Equation (11) stated in terms of forward probabilities via Equation (5): - - Where: - - the sum is over x_0 = {C_0 ... C_{k-1}} (classes for x_0) - - p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) - - Args: - log_p_x_0: (`torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`): - The log probabilities for the predicted classes of the initial latent pixels. Does not include a - prediction for the masked class as the initial unnoised image cannot be masked. - - x_t: (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t` - - t (torch.Long): - The timestep that determines which transition matrix is used. - - Returns: - `torch.FloatTensor` of shape `(batch size, num classes, num latent pixels)`: - The log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11). - """ - log_onehot_x_t = index_to_log_onehot(x_t, self.num_embed) - - log_q_x_t_given_x_0 = self.log_Q_t_transitioning_to_known_class( - t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=True - ) - - log_q_t_given_x_t_min_1 = self.log_Q_t_transitioning_to_known_class( - t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=False - ) - - # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) - # . . . - # . . . - # . . . - # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) - q = log_p_x_0 - log_q_x_t_given_x_0 - - # sum_0 = p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}), ... , - # sum_n = p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) - q_log_sum_exp = torch.logsumexp(q, dim=1, keepdim=True) - - # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0 ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n - # . . . - # . . . - # . . . - # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0 ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n - q = q - q_log_sum_exp - - # (p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1} - # . . . - # . . . - # . . . 
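`step` above draws the previous latents by adding Gumbel noise to the log-probabilities and taking an argmax. The standalone sketch below (torch only) illustrates why that works: the Gumbel-max trick samples from the categorical distribution defined by those log-probabilities.

```python
# Standalone sketch of the Gumbel-max sampling used in VQDiffusionScheduler.step:
# argmax over (log-probs + Gumbel noise) is a draw from the categorical
# distribution given by those log-probs.
import torch

torch.manual_seed(0)
log_probs = torch.log_softmax(torch.randn(5), dim=0)  # 5 classes

def gumbel_sample(logits, n):
    uniform = torch.rand(n, logits.shape[0])
    gumbel = -torch.log(-torch.log(uniform + 1e-30) + 1e-30)
    return (logits + gumbel).argmax(dim=-1)

samples = gumbel_sample(log_probs, 100_000)
empirical = torch.bincount(samples, minlength=5).float() / samples.numel()
print(empirical)        # close to ...
print(log_probs.exp())  # ... the target probabilities
```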
- # (p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1} - # c_cumulative_{t-1} ... c_cumulative_{t-1} - q = self.apply_cumulative_transitions(q, t - 1) - - # ((p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_0 ... ((p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_n - # . . . - # . . . - # . . . - # ((p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_0 ... ((p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_n - # c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0 ... c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0 - log_p_x_t_min_1 = q + log_q_t_given_x_t_min_1 + q_log_sum_exp - - # For each column, there are two possible cases. - # - # Where: - # - sum(p_n(x_0))) is summing over all classes for x_0 - # - C_i is the class transitioning from (not to be confused with c_t and c_cumulative_t being used for gamma's) - # - C_j is the class transitioning to - # - # 1. x_t is masked i.e. x_t = c_k - # - # Simplifying the expression, the column vector is: - # . - # . - # . - # (c_t / c_cumulative_t) * (a_cumulative_{t-1} * p_n(x_0 = C_i | x_t) + b_cumulative_{t-1} * sum(p_n(x_0))) - # . - # . - # . - # (c_cumulative_{t-1} / c_cumulative_t) * sum(p_n(x_0)) - # - # From equation (11) stated in terms of forward probabilities, the last row is trivially verified. - # - # For the other rows, we can state the equation as ... - # - # (c_t / c_cumulative_t) * [b_cumulative_{t-1} * p(x_0=c_0) + ... + (a_cumulative_{t-1} + b_cumulative_{t-1}) * p(x_0=C_i) + ... + b_cumulative_{k-1} * p(x_0=c_{k-1})] - # - # This verifies the other rows. - # - # 2. x_t is not masked - # - # Simplifying the expression, there are two cases for the rows of the column vector, where C_j = C_i and where C_j != C_i: - # . - # . - # . - # C_j != C_i: b_t * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / b_cumulative_t) * p_n(x_0 = C_i) + ... + (b_cumulative_{t-1} / (a_cumulative_t + b_cumulative_t)) * p_n(c_0=C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1})) - # . - # . - # . - # C_j = C_i: (a_t + b_t) * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / (a_cumulative_t + b_cumulative_t)) * p_n(x_0 = C_i = C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1})) - # . - # . - # . - # 0 - # - # The last row is trivially verified. The other rows can be verified by directly expanding equation (11) stated in terms of forward probabilities. - return log_p_x_t_min_1 - - def log_Q_t_transitioning_to_known_class( - self, *, t: torch.int, x_t: torch.LongTensor, log_onehot_x_t: torch.FloatTensor, cumulative: bool - ): - """ - Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each - latent pixel in `x_t`. - - See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix - is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs. 
- - Args: - t (torch.Long): - The timestep that determines which transition matrix is used. - - x_t (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t`. - - log_onehot_x_t (`torch.FloatTensor` of shape `(batch size, num classes, num latent pixels)`): - The log one-hot vectors of `x_t` - - cumulative (`bool`): - If cumulative is `False`, we use the single step transition matrix `t-1`->`t`. If cumulative is `True`, - we use the cumulative transition matrix `0`->`t`. - - Returns: - `torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`: - Each _column_ of the returned matrix is a _row_ of log probabilities of the complete probability - transition matrix. - - When non cumulative, returns `self.num_classes - 1` rows because the initial latent pixel cannot be - masked. - - Where: - - `q_n` is the probability distribution for the forward process of the `n`th latent pixel. - - C_0 is a class of a latent pixel embedding - - C_k is the class of the masked latent pixel - - non-cumulative result (omitting logarithms): - ``` - q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0) - . . . - . . . - . . . - q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k) - ``` - - cumulative result (omitting logarithms): - ``` - q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) - . . . - . . . - . . . - q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1}) - ``` - """ - if cumulative: - a = self.log_cumprod_at[t] - b = self.log_cumprod_bt[t] - c = self.log_cumprod_ct[t] - else: - a = self.log_at[t] - b = self.log_bt[t] - c = self.log_ct[t] - - if not cumulative: - # The values in the onehot vector can also be used as the logprobs for transitioning - # from masked latent pixels. If we are not calculating the cumulative transitions, - # we need to save these vectors to be re-appended to the final matrix so the values - # aren't overwritten. - # - # `P(x_t!=mask|x_{t-1=mask}) = 0` and 0 will be the value of the last row of the onehot vector - # if x_t is not masked - # - # `P(x_t=mask|x_{t-1=mask}) = 1` and 1 will be the value of the last row of the onehot vector - # if x_t is masked - log_onehot_x_t_transitioning_from_masked = log_onehot_x_t[:, -1, :].unsqueeze(1) - - # `index_to_log_onehot` will add onehot vectors for masked pixels, - # so the default one hot matrix has one too many rows. See the doc string - # for an explanation of the dimensionality of the returned matrix. - log_onehot_x_t = log_onehot_x_t[:, :-1, :] - - # this is a cheeky trick to produce the transition probabilities using log one-hot vectors. - # - # Don't worry about what values this sets in the columns that mark transitions - # to masked latent pixels. They are overwrote later with the `mask_class_mask`. - # - # Looking at the below logspace formula in non-logspace, each value will evaluate to either - # `1 * a + b = a + b` where `log_Q_t` has the one hot value in the column - # or - # `0 * a + b = b` where `log_Q_t` has the 0 values in the column. - # - # See equation 7 for more details. 
- log_Q_t = (log_onehot_x_t + a).logaddexp(b) - - # The whole column of each masked pixel is `c` - mask_class_mask = x_t == self.mask_class - mask_class_mask = mask_class_mask.unsqueeze(1).expand(-1, self.num_embed - 1, -1) - log_Q_t[mask_class_mask] = c - - if not cumulative: - log_Q_t = torch.cat((log_Q_t, log_onehot_x_t_transitioning_from_masked), dim=1) - - return log_Q_t - - def apply_cumulative_transitions(self, q, t): - bsz = q.shape[0] - a = self.log_cumprod_at[t] - b = self.log_cumprod_bt[t] - c = self.log_cumprod_ct[t] - - num_latent_pixels = q.shape[2] - c = c.expand(bsz, 1, num_latent_pixels) - - q = (q + a).logaddexp(b) - q = torch.cat((q, c), dim=1) - - return q diff --git a/spaces/deepdoctection/deepdoctection/README.md b/spaces/deepdoctection/deepdoctection/README.md deleted file mode 100644 index 9dab12e438a29d6bd0434289fd96d3e1c22bd76f..0000000000000000000000000000000000000000 --- a/spaces/deepdoctection/deepdoctection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Deepdoctection -emoji: 🏃 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dilums/sentence-similarity/components/Main/Results/index.tsx b/spaces/dilums/sentence-similarity/components/Main/Results/index.tsx deleted file mode 100644 index 944e5178b034d991f5c110f201bc3d3f1ab7a471..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/Main/Results/index.tsx +++ /dev/null @@ -1,3 +0,0 @@ -import Results from './Results'; - -export default Results; \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/core/anchor/utils.py b/spaces/dineshreddy/WALT/mmdet/core/anchor/utils.py deleted file mode 100644 index ab9b53f37f7be1f52fe63c5e53df64ac1303b9e0..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. - allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. 
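The `logaddexp` line in `log_Q_t_transitioning_to_known_class` above builds the transition probabilities entirely in log space. A small standalone check shows that it reproduces `one_hot * alpha + beta` in linear space, exactly as the surrounding comments describe:

```python
# Standalone check of the log-space trick used in log_Q_t_transitioning_to_known_class:
# (log_onehot + log(alpha)).logaddexp(log(beta)) == log(onehot * alpha + beta).
import torch
import torch.nn.functional as F

alpha, beta = 0.9, 0.01
onehot = F.one_hot(torch.tensor([2]), num_classes=5).float()  # shape (1, 5)
log_onehot = torch.log(onehot.clamp(min=1e-30))               # 0 or ~-69 per entry

log_q = (log_onehot + torch.log(torch.tensor(alpha))).logaddexp(
    torch.log(torch.tensor(beta))
)
print(log_q.exp())            # ~[0.01, 0.01, 0.91, 0.01, 0.01]
print(onehot * alpha + beta)  # same values, computed in linear space
```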
- - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. - - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/trident_roi_head.py b/spaces/dineshreddy/WALT/mmdet/models/roi_heads/trident_roi_head.py deleted file mode 100644 index 245569e50b45cc8e21ba8e7210edf4bd0c7f27c5..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/trident_roi_head.py +++ /dev/null @@ -1,119 +0,0 @@ -import torch -from mmcv.ops import batched_nms - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes, - multiclass_nms) -from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead -from ..builder import HEADS - - -@HEADS.register_module() -class TridentRoIHead(StandardRoIHead): - """Trident roi head. - - Args: - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - """ - - def __init__(self, num_branch, test_branch_idx, **kwargs): - self.num_branch = num_branch - self.test_branch_idx = test_branch_idx - super(TridentRoIHead, self).__init__(**kwargs) - - def merge_trident_bboxes(self, trident_det_bboxes, trident_det_labels): - """Merge bbox predictions of each branch.""" - if trident_det_bboxes.numel() == 0: - det_bboxes = trident_det_bboxes.new_zeros((0, 5)) - det_labels = trident_det_bboxes.new_zeros((0, ), dtype=torch.long) - else: - nms_bboxes = trident_det_bboxes[:, :4] - nms_scores = trident_det_bboxes[:, 4].contiguous() - nms_inds = trident_det_labels - nms_cfg = self.test_cfg['nms'] - det_bboxes, keep = batched_nms(nms_bboxes, nms_scores, nms_inds, - nms_cfg) - det_labels = trident_det_labels[keep] - if self.test_cfg['max_per_img'] > 0: - det_labels = det_labels[:self.test_cfg['max_per_img']] - det_bboxes = det_bboxes[:self.test_cfg['max_per_img']] - - return det_bboxes, det_labels - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation as follows: - - 1. Compute prediction bbox and label per branch. - 2. Merge predictions of each branch according to scores of - bboxes, i.e., bboxes with higher score are kept to give - top-k prediction. - """ - assert self.with_bbox, 'Bbox head must be implemented.' 
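`calc_region` above keeps the box center fixed and moves each edge inward by `ratio` of the box size. A quick worked example, re-stating the function so it runs standalone with torch only:

```python
# Standalone worked example of mmdet's calc_region (re-stated from the source above):
# each edge is pulled toward the opposite edge by `ratio`, keeping the center fixed.
import torch

def calc_region(bbox, ratio, featmap_size=None):
    x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long()
    y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long()
    x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long()
    y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long()
    if featmap_size is not None:
        x1 = x1.clamp(min=0, max=featmap_size[1])
        y1 = y1.clamp(min=0, max=featmap_size[0])
        x2 = x2.clamp(min=0, max=featmap_size[1])
        y2 = y2.clamp(min=0, max=featmap_size[0])
    return (x1, y1, x2, y2)

bbox = torch.tensor([0.0, 0.0, 100.0, 100.0])
# (25, 25, 75, 75): the box is inset by a quarter of its size on each side.
print(calc_region(bbox, 0.25))
```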
- det_bboxes_list, det_labels_list = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - num_branch = self.num_branch if self.test_branch_idx == -1 else 1 - for _ in range(len(det_bboxes_list)): - if det_bboxes_list[_].shape[0] == 0: - det_bboxes_list[_] = det_bboxes_list[_].new_empty((0, 5)) - det_bboxes, det_labels = [], [] - for i in range(len(img_metas) // num_branch): - det_result = self.merge_trident_bboxes( - torch.cat(det_bboxes_list[i * num_branch:(i + 1) * - num_branch]), - torch.cat(det_labels_list[i * num_branch:(i + 1) * - num_branch])) - det_bboxes.append(det_result[0]) - det_labels.append(det_result[1]) - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - return bbox_results - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - trident_bboxes, trident_scores = [], [] - for branch_idx in range(len(proposal_list)): - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - trident_bboxes.append(bboxes) - trident_scores.append(scores) - - aug_bboxes.append(torch.cat(trident_bboxes, 0)) - aug_scores.append(torch.cat(trident_scores, 0)) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/doctorsafe/mychat/README.md b/spaces/doctorsafe/mychat/README.md deleted file mode 100644 index 818380a4f417a6b1c5b9dbbad6e2cd254f9402e2..0000000000000000000000000000000000000000 --- a/spaces/doctorsafe/mychat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mychat -emoji: 🚀 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dolceschokolade/chatbot-mini/components/Sidebar/index.ts b/spaces/dolceschokolade/chatbot-mini/components/Sidebar/index.ts deleted file mode 100644 index e842a8591f87200d592241de7850dab18224f4c0..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/components/Sidebar/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './Sidebar'; diff --git a/spaces/dolceschokolade/chatbot-mini/types/export.ts b/spaces/dolceschokolade/chatbot-mini/types/export.ts deleted file mode 100644 index 655e3f7b7a47012fb7fa96456304f024ce1f0c21..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/export.ts +++ /dev/null @@ -1,45 +0,0 @@ -import { Conversation, Message } from './chat'; -import { FolderInterface } from './folder'; -import { OpenAIModel } from 
'./openai'; -import { Prompt } from './prompt'; - -export type SupportedExportFormats = - | ExportFormatV1 - | ExportFormatV2 - | ExportFormatV3 - | ExportFormatV4; -export type LatestExportFormat = ExportFormatV4; - -//////////////////////////////////////////////////////////////////////////////////////////// -interface ConversationV1 { - id: number; - name: string; - messages: Message[]; -} - -export type ExportFormatV1 = ConversationV1[]; - -//////////////////////////////////////////////////////////////////////////////////////////// -interface ChatFolder { - id: number; - name: string; -} - -export interface ExportFormatV2 { - history: Conversation[] | null; - folders: ChatFolder[] | null; -} - -//////////////////////////////////////////////////////////////////////////////////////////// -export interface ExportFormatV3 { - version: 3; - history: Conversation[]; - folders: FolderInterface[]; -} - -export interface ExportFormatV4 { - version: 4; - history: Conversation[]; - folders: FolderInterface[]; - prompts: Prompt[]; -} diff --git a/spaces/dorkai/singpt/app.py b/spaces/dorkai/singpt/app.py deleted file mode 100644 index 5fa7ad38bf6c6c533d170074690124231000532d..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import os -os.system('python download-model.py PygmalionAI/pygmalion-350m --branch main') -# os.system('python download-model.py waifu-workshop/pygmalion-6b --branch original-sharded') -os.system('python server.py --cpu --cai-chat --model pygmalion-350m') \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/llama.cpp-models.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/llama.cpp-models.md deleted file mode 100644 index 57fbf61375c9b999ef862ed2b8618317d9ef34ad..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/llama.cpp-models.md +++ /dev/null @@ -1,17 +0,0 @@ -## Using llama.cpp in the web UI - -#### Pre-converted models - -Simply place the model in the `models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`. - -#### Convert LLaMA yourself - -Follow the instructions in the llama.cpp README to generate the `ggml-model-q4_0.bin` file: https://github.com/ggerganov/llama.cpp#usage - -## Performance - -This was the performance of llama-7b int4 on my i5-12400F: - -> Output generated in 33.07 seconds (6.05 tokens/s, 200 tokens, context 17) - -You can change the number of threads with `--threads N`. diff --git a/spaces/dpc/mmstts/README.md b/spaces/dpc/mmstts/README.md deleted file mode 100644 index 4d03035d4264479f6fbad0503aaa444da0501649..0000000000000000000000000000000000000000 --- a/spaces/dpc/mmstts/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Massively Multilingual Speech (MMS) - Text To Speech -emoji: 🌍 -colorFrom: yellow -colorTo: gray -sdk: gradio -app_file: app.py -pinned: true ---- - -## Info -Text to speech for more than 1000+ languages - Using [fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/mms/README.md) MMS TTS and [ttsmms](https://github.com/wannaphong/ttsmms) wrapper. - -+ Language Iso code list (`lang_code.txt`) is adapted from -https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html - -The dropdown list is quite long, so I have placed some of my friends' frequently used languages at the top. The other 1000+ languages are sorted alphabetically. 
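A minimal sketch of the dropdown ordering described above, assuming `lang_code.txt` holds one ISO code per line; the "preferred" set shown here is purely illustrative:

```python
# Minimal sketch of the dropdown ordering described above: a few frequently used
# language codes pinned to the top, the remaining 1000+ codes sorted alphabetically.
# Assumes lang_code.txt contains one ISO code per line; the preferred list is illustrative.
preferred = ["mya", "eng", "tha"]

with open("lang_code.txt", encoding="utf-8") as f:
    codes = [line.strip() for line in f if line.strip()]

rest = sorted(c for c in codes if c not in preferred)
dropdown_choices = [c for c in preferred if c in codes] + rest
print(dropdown_choices[:10])
```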
- -+ `mm_num2word.py` is adapted from https://github.com/hpbyte/Myanmar_Number_to_Words - -+ Other dependencies, please prefer to the `requirements.txt` file. \ No newline at end of file diff --git a/spaces/dragonSwing/annotate-anything/style.css b/spaces/dragonSwing/annotate-anything/style.css deleted file mode 100644 index 762d2ba93ffdd82cb4fac6d8b4c3d50244d2b6d7..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/style.css +++ /dev/null @@ -1,11 +0,0 @@ -.container { - max-width: 1368px; - margin-left: auto; - margin-right: auto; - } - - #row-flex { - display: flex; - align-items: center; - justify-content: center; - } diff --git a/spaces/dvc890/go-chatgpt-api/api/platform/constant.go b/spaces/dvc890/go-chatgpt-api/api/platform/constant.go deleted file mode 100644 index 16470124e800242fe57ea7a7e9911beb0a1427c0..0000000000000000000000000000000000000000 --- a/spaces/dvc890/go-chatgpt-api/api/platform/constant.go +++ /dev/null @@ -1,35 +0,0 @@ -package platform - -import "github.com/linweiyuan/go-chatgpt-api/api" - -//goland:noinspection SpellCheckingInspection -const ( - apiUrl = "https://api.openai.com" - - apiListModels = apiUrl + "/v1/models" - apiRetrieveModel = apiUrl + "/v1/models/%s" - apiCreateCompletions = apiUrl + "/v1/completions" - apiCreataeChatCompletions = apiUrl + "/v1/chat/completions" - apiCreateEdit = apiUrl + "/v1/edits" - apiCreateImage = apiUrl + "/v1/images/generations" - apiCreateEmbeddings = apiUrl + "/v1/embeddings" - apiListFiles = apiUrl + "/v1/files" - apiCreateModeration = apiUrl + "/v1/moderations" - - apiGetCreditGrants = apiUrl + "/dashboard/billing/credit_grants" - apiGetSubscription = apiUrl + "/dashboard/billing/subscription" - apiGetApiKeys = apiUrl + "/dashboard/user/api_keys" - - platformAuthClientID = "DRivsnm2Mu42T3KOpqdtwB3NYviHYzwD" - platformAuthAudience = "https://api.openai.com/v1" - platformAuthRedirectURL = "https://platform.openai.com/auth/callback" - platformAuthScope = "openid profile email offline_access" - platformAuthResponseType = "code" - platformAuthGrantType = "authorization_code" - platformAuth0Url = api.Auth0Url + "/authorize?" - getTokenUrl = api.Auth0Url + "/oauth/token" - auth0Client = "eyJuYW1lIjoiYXV0aDAtc3BhLWpzIiwidmVyc2lvbiI6IjEuMjEuMCJ9" // '{"name":"auth0-spa-js","version":"1.21.0"}' - auth0LogoutUrl = api.Auth0Url + "/v2/logout?returnTo=https%3A%2F%2Fplatform.openai.com%2Floggedout&client_id=" + platformAuthClientID + "&auth0Client=" + auth0Client - dashboardLoginUrl = "https://api.openai.com/dashboard/onboarding/login" - getSessionKeyErrorMessage = "Failed to get session key." 
-) diff --git a/spaces/eeemef/demo-cats-vs-dogs/README.md b/spaces/eeemef/demo-cats-vs-dogs/README.md deleted file mode 100644 index f72df41c7dbb9094a23ce125e76aab2c902360e9..0000000000000000000000000000000000000000 --- a/spaces/eeemef/demo-cats-vs-dogs/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Demo Cats Vs Dogs -emoji: ⚡ -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/elplaguister/Yuuka_TTS/src/text/cleaners.py b/spaces/elplaguister/Yuuka_TTS/src/text/cleaners.py deleted file mode 100644 index 26a25783cc9872ad4a8a99d16b2a3036d04cb7a7..0000000000000000000000000000000000000000 --- a/spaces/elplaguister/Yuuka_TTS/src/text/cleaners.py +++ /dev/null @@ -1,17 +0,0 @@ -import re - -def japanese_cleaners(text): - from src.text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - if len(text) == 0 or re.match('[A-Za-z]', text[-1]): - text += '.' - return text - - -def japanese_cleaners2(text): - text = text.replace('・・・', '…').replace('・', ' ') - text = japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') \ - .replace('(', '').replace(')', '') \ - .replace('[', '').replace(']', '') \ - .replace('*', ' ').replace('{', '').replace('}', '') - return text \ No newline at end of file diff --git a/spaces/enzostvs/stable-diffusion-tpu/utils/useUser.ts b/spaces/enzostvs/stable-diffusion-tpu/utils/useUser.ts deleted file mode 100644 index ec29e2387fc1b239c2762bd60ef7ff761bddf45e..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/utils/useUser.ts +++ /dev/null @@ -1,85 +0,0 @@ -"use client"; - -import { useCookie, useUpdateEffect } from "react-use"; -import { useQuery } from "@tanstack/react-query"; -import { useState } from "react"; - -export const useUser = () => { - const [value, setValue, remove] = useCookie("auth_hf_token"); - - const { - data: user, - refetch, - isFetching: loading, - remove: clear, - }: any = useQuery( - ["user.me"], - async () => { - if (!value) return null; - if (user) return user; - const request = await fetch("/api/me", { - method: "GET", - headers: { - Authorization: `Bearer ${value}`, - }, - }) - - const res = await request.clone().json().catch(() => ({})); - - if (!res.ok) { - remove(); - clear(); - return null; - } - return res?.user; - }, - { - refetchOnWindowFocus: false, - refetchOnMount: false, - refetchOnReconnect: false, - } - ); - - useUpdateEffect(() => { - if (value) { - refetch() - } - } - , [value]); - - const openWindowLogin = async () => { - const response = await fetch(`/api/login`); - const { ok, redirect } = await response.json(); - if (ok && redirect) { - window.open(redirect, "_blank"); - } - }; - - const getAuthorization = async (code: string) => { - const request = await fetch("/api/auth", { - method: "POST", - body: JSON.stringify({ - code, - }), - }); - const res = await request.clone().json().catch(() => ({})); - if (!res.ok) { - return null; - } - setValue(res.access_token, { - expires: res.experes_in, - sameSite: "none", - path: "/", - secure: true, - }); - } - - return { - openWindowLogin, - user, - refetch, - loading, - token: `Bearer ${value}`, - getAuthorization - } -} \ No newline at end of file diff --git a/spaces/errorok/rvc-models-en-test/infer_pack/models.py b/spaces/errorok/rvc-models-en-test/infer_pack/models.py deleted file mode 100644 index 
96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/errorok/rvc-models-en-test/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = 
kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / 
self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - 
add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += 
self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], 
nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels 
- self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - 
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/falterWliame/Face_Mask_Detection/Equipped Music Slow Motion Tokyo Soundscapes Vol 3 WAV REX212 HOT.md b/spaces/falterWliame/Face_Mask_Detection/Equipped Music Slow Motion Tokyo Soundscapes Vol 3 WAV REX212 HOT.md deleted file mode 100644 index 8b200fdfcdf6b7f25228d2dc3493b40f7773c0da..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Equipped Music Slow Motion Tokyo Soundscapes Vol 3 WAV REX212 HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Equipped Music Slow Motion Tokyo Soundscapes Vol 3 WAV REX212


    Download ✫✫✫ https://urlca.com/2uDcct



    -
-Equipped Music Slow Motion Tokyo Soundscapes Vol 3 WAV REX212. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/New Solucionario De Transferencia De Calor Jose Manrique.md b/spaces/falterWliame/Face_Mask_Detection/New Solucionario De Transferencia De Calor Jose Manrique.md deleted file mode 100644 index cdc911d5ab327b19e8c5bcbed839b6424d092ca0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/New Solucionario De Transferencia De Calor Jose Manrique.md +++ /dev/null @@ -1,36 +0,0 @@ -

    New Solucionario De Transferencia De Calor Jose Manrique


    Downloadhttps://urlca.com/2uDcky



    -
    -viaje de armeria vera por decisión médica" en ese centro, explicó.Q: - -How to migrate server side.net solution to a different server - -I have a.net solution that has a database server, a web server and a client server. I've been asked to take the solution off of one server and migrate it to another server. Is there a way to do this without having to remake the entire application? - -The way it works is that the client uses the web server to get a page, then uses the server to process the page and then updates a database. - -A: - -Your question isn't clear but I think this is what you are after: - -Basically the steps are: - -Revert the database schema to a previous version, no SQL required - -Copy the DLLs from your old server to the new server - -Copy the files from the web site on the old server to the web site on the new server - -Since this is a relatively straight forward process I'm surprised that you didn't find this information, I would recommend that you add it to your question. - -Nidhi Bharadwaj reports on the Centre’s recent push to reach out to all schools, to ensure they are properly stocked with a range of sanitary products. - -When women on social media began discussing sanitary products not being available in schools, the Kerala government claimed it was just a “problem of awareness”. At that time, in April, a school in Kerala had closed all doors and bathrooms in the bathrooms to ensure privacy. And when the central government first pledged to make sanitary pads and other menstrual products available in schools, it started with just two states in August. - -But now the issue is being taken more seriously. - -On 5 April, schools minister Rajyavardhan Singh Rathore announced that the Union government will go ahead with plans to make sanitary pads available to schoolgirls. Rathore said that he had written to the heads of all schools in the country to ensure they have all the sanitary products that the students need. - -“The department of school education has written to the heads of all schools informing them that the government is going to provide sanitary pads for schoolgirls in all schools and the 4fefd39f24
    -
    -
    -

    diff --git a/spaces/farukozderim/space-building-space-25/app.py b/spaces/farukozderim/space-building-space-25/app.py deleted file mode 100644 index 56df2010331b09515419d4a6960cf98d149cd39a..0000000000000000000000000000000000000000 --- a/spaces/farukozderim/space-building-space-25/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr -name_list = ['spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot', 'spaces/deepklarity/poster2plot'] -interfaces = [gr.Interface.load(name) for name in name_list] -gr.mix.Parallel(*interfaces, title="Title", description="Description").launch() \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Bike Rider Stunts APK Challenge Yourself with Extreme Bike Stunts.md b/spaces/fatiXbelha/sd/Bike Rider Stunts APK Challenge Yourself with Extreme Bike Stunts.md deleted file mode 100644 index d52f957ae426edc517ba4378b0e92a4a362a01d4..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bike Rider Stunts APK Challenge Yourself with Extreme Bike Stunts.md +++ /dev/null @@ -1,104 +0,0 @@ -
    -

    Bike Rider Stunts APK: A Guide to Download and Play the Best Bike Racing Game

    -

    Introduction

    -

    Do you love bike racing games? Do you want to experience the thrill of performing stunts and tricks on your bike? Do you want to challenge yourself with different modes and levels of difficulty? If you answered yes to any of these questions, then you should try Bike Rider Stunts APK, one of the best bike racing games for Android devices.

    -

    What is Bike Rider Stunts APK?

    -

    Bike Rider Stunts APK is an Android game that lets you ride your bike on various tracks and terrains, such as city roads, off-road tracks, ramps, bridges, tunnels, and more. You can perform amazing stunts and tricks on your bike, such as wheelies, flips, jumps, drifts, and more. You can also customize your bike with different colors, stickers, wheels, and engines. You can choose from different modes of gameplay, such as career mode, free ride mode, time trial mode, and endless mode. You can also select from different levels of difficulty, such as easy, medium, hard, and extreme.

    -

    bike rider stunts apk


    Download File ••• https://urllie.com/2uNFFv



    -

    Why should you download and play Bike Rider Stunts APK?

    -

    There are many reasons why you should download and play Bike Rider Stunts APK. Here are some of them:

    -
      -
    • It is free to download and play. You don't need to pay anything to enjoy this game.
    • -
    • It is easy to download and install. You don't need to root your device or use any complicated methods to get this game.
    • -
    • It is fun and addictive. You will never get bored with this game as it offers many challenges and rewards.
    • -
    • It is realistic and immersive. You will feel like you are riding a real bike as the game features realistic graphics, physics, sounds, and controls.
    • -
    • It is suitable for all ages. You can play this game with your friends and family as it is safe and suitable for everyone.
    • -
    -

    How to download and install Bike Rider Stunts APK on your Android device

    -

    If you are convinced that Bike Rider Stunts APK is the game for you, then you might be wondering how to download and install it on your Android device. Don't worry, it is very simple and easy. Just follow these steps:

    -

    Step 1: Enable unknown sources on your device

    -

    Since Bike Rider Stunts APK is not available on the Google Play Store, you need to enable unknown sources on your device to allow the installation of apps from other sources. To do this, go to your device settings > security > unknown sources > enable.

    -

    Step 2: Download the APK file from a trusted source

    -

    The next step is to download the APK file of Bike Rider Stunts from a trusted source. You can use the link below to download it from our website:

    -

    Bike Rider Stunts APK (Android Game) - Free Download

    -

    This link will take you to a page where you can see more information about the game, such as its size, version, rating, screenshots, etc. You can also see the download button at the bottom of the page. Click on it to start downloading the APK file.
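Because the APK comes from outside the Play Store, it is worth confirming that the file you end up with is the one the site actually published. The sketch below is one way to do that from a computer before copying the file to your phone; it assumes the download page publishes a SHA-256 checksum, and the file name and expected checksum shown here are placeholders rather than real values.

```python
import hashlib
from pathlib import Path

# Placeholders: use the real file name and the checksum published on the download page.
APK_PATH = Path("bike-rider-stunts.apk")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so a large APK does not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches: this is the file the site published.")
    else:
        print(f"Checksum mismatch: expected {EXPECTED_SHA256}, got {actual}. Do not install this file.")
```

If the two values do not match, download the file again or look for another source rather than installing it.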

    -

    Step 3: Locate and install the APK file on your device

    -

The final step is to locate and install the APK file on your device. To do this, go to your device file manager > downloads > Bike Rider Stunts APK. Tap on the file to open it and then tap on install. Wait for a few seconds until the installation is complete. You can then launch the game from your app drawer or home screen.

    -

    bike stunt racing : bike games apk
    -bike rider stunts 3d apk
    -bike stunt master : extreme racing apk
    -bike rider stunts : offline games apk
    -bike stunt tricks master apk
    -bike rider stunts : free games apk
    -bike stunt challenge : racing games apk
    -bike rider stunts : simulator apk
    -bike stunt adventure : 3d games apk
    -bike rider stunts : pro apk
    -bike stunt legend : moto racing apk
    -bike rider stunts : hd graphics apk
    -bike stunt mania : crazy games apk
    -bike rider stunts : multiplayer apk
    -bike stunt hero : action games apk
    -bike rider stunts : mod apk
    -bike stunt master 2 : new games apk
    -bike rider stunts : online games apk
    -bike stunt racer : fun games apk
    -bike rider stunts : realistic physics apk
    -bike stunt extreme : best games apk
    -bike rider stunts : unlimited coins apk
    -bike stunt 3d : amazing games apk
    -bike rider stunts : unlock all bikes apk
    -bike stunt trial : hard games apk
    -bike rider stunts : easy controls apk
    -bike stunt championship : top games apk
    -bike rider stunts : latest version apk
    -bike stunt evolution : cool games apk
    -bike rider stunts : no ads apk
    -bike stunt rampage : awesome games apk
    -bike rider stunts : offline mode apk
    -bike stunt master 3 : new levels apk
    -bike rider stunts : high score apk
    -bike stunt fever : addictive games apk
    -bike rider stunts : low mb apk
    -bike stunt daredevil : extreme games apk
    -bike rider stunts : fast download apk
    -bike stunt race 3d : free ride apk
    -bike rider stunts : smooth gameplay apk
    -bike stunt simulator : realistic games apk
    -bike rider stunts : high quality sound apk
    -bike stunt parkour : adventure games apk
    -bike rider stunts : no internet required apk
    -bike stunt master 4 : new bikes apk
    -bike rider stunts : different modes apk
    -bike stunt showdown : racing rivals apk
    -bike rider stunts : various tracks apk
    -bike stunt world : amazing graphics apk

    -

    How to play Bike Rider Stunts APK and enjoy its features

    -

    Now that you have downloaded and installed Bike Rider Stunts APK on your device, you are ready to play and enjoy its features. Here are some tips on how to play the game and make the most of it:

    -

    Choose your bike and customize it

    -

    The first thing you need to do is to choose your bike and customize it according to your preference. You can choose from different types of bikes, such as sports bikes, dirt bikes, choppers, etc. You can also change the color, sticker, wheel, and engine of your bike. You can preview your bike before confirming your choice.

    -

    Select a mode and a level

    -

    The next thing you need to do is to select a mode and a level of gameplay. You can choose from four modes: career mode, free ride mode, time trial mode, and endless mode. Each mode has its own objectives and challenges. You can also choose from different levels of difficulty: easy, medium, hard, and extreme. Each level has its own track and terrain.

    -

    Perform amazing stunts and tricks

    -

    The main attraction of Bike Rider Stunts APK is the ability to perform amazing stunts and tricks on your bike. You can use the buttons on the screen to control your bike's speed, direction, brake, and tilt. You can also use the accelerometer of your device to balance your bike. You can perform stunts such as wheelies, flips, jumps, drifts, and more. You can also use the ramps, bridges, tunnels, and other obstacles on the track to perform more stunts.

    -

    Collect coins and unlock new bikes and levels

    -

    As you play Bike Rider Stunts APK, you can collect coins that are scattered on the track. You can use these coins to buy new bikes and unlock new levels. You can also earn coins by completing missions and achievements in the game. The more coins you have, the more options you have to customize your bike and gameplay.

    -

    Conclusion

    -

    Bike Rider Stunts APK is a great game for bike racing enthusiasts who want to experience the thrill of performing stunts and tricks on their bikes. It is easy to download and install, fun and addictive, realistic and immersive, and suitable for all ages. It offers many features such as different bikes, modes, levels, stunts, coins, etc. It is a game that will keep you entertained for hours.

    -

    If you are looking for a bike racing game that will challenge your skills and creativity, then you should download Bike Rider Stunts APK today. You will not regret it.

    -

    What are you waiting for? Download Bike Rider Stunts APK now and enjoy the best bike racing game ever!

    -

    FAQs

    -
      -
    • Q: Is Bike Rider Stunts APK safe to download and install?
    • -
    • A: Yes, Bike Rider Stunts APK is safe to download and install as long as you use a trusted source like our website. We scan all our files for viruses and malware before uploading them.
    • -
    • Q: How much space does Bike Rider Stunts APK require on my device?
    • -
    • A: Bike Rider Stunts APK requires about 50 MB of space on your device.
    • -
    • Q: Can I play Bike Rider Stunts APK offline?
    • -
    • A: Yes, you can play Bike Rider Stunts APK offline without any internet connection.
    • -
    • Q: Can I share my progress and achievements with my friends?
    • -
    • A: Yes, you can share your progress and achievements with your friends via social media platforms such as Facebook, Twitter, Instagram, etc.
    • -
    • Q: Can I contact the developers of Bike Rider Stunts APK for feedback or support?
    • -
    • A: Yes, you can contact the developers of Bike Rider Stunts APK via their email address: bikeriderstunts@gmail.com.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Call of Duty Mobile APK Vision - The Ultimate Guide to Install and Play.md b/spaces/fatiXbelha/sd/Call of Duty Mobile APK Vision - The Ultimate Guide to Install and Play.md deleted file mode 100644 index eb48572aa46aebeea5dcb841a511dbb9a71aa44e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Call of Duty Mobile APK Vision - The Ultimate Guide to Install and Play.md +++ /dev/null @@ -1,170 +0,0 @@ -
    -

    Download Call of Duty Mobile APK Vision: How to Play the Latest Version of COD Mobile on Android

    -

    Call of Duty Mobile is one of the most popular and exciting multiplayer FPS games on mobile devices. It offers a thrilling and immersive gaming experience with high-quality graphics, realistic sound effects, and smooth gameplay. If you are a fan of Call of Duty Mobile, you might be wondering how to download and play the latest version of the game, which is called Call of Duty Mobile APK Vision. In this article, we will explain what Call of Duty Mobile APK Vision is, how to download and install it on your Android device, and how to play it with the best tips and tricks.

    -

    What is Call of Duty Mobile APK Vision?

    -

    Call of Duty Mobile APK Vision is the unofficial name given to the latest update of Call of Duty Mobile, which is also known as Season 5: Tropical Vision. This update was released on June 2nd, 2023, and it brings a lot of new content and features to the game, such as:

    -

    download call of duty mobile apk vision


    Download ->>->>->> https://urllie.com/2uNGoE



    -

    The features of COD Mobile Season 5: Tropical Vision

    -
      -
    • A new map called Docks, which is a small and fast-paced map set in a shipyard.
    • -
    • A new game mode called Cranked: Confirmed, which is a combination of Cranked and Kill Confirmed modes. In this mode, you have to kill enemies and collect their dog tags before your timer runs out.
    • -
    • A new battle pass with 50 tiers of rewards, including new characters, weapons, skins, emotes, and more.
    • -
    • Two new weapons: the CR-56 AMAX assault rifle and the Shorty shotgun.
    • -
    • A new tactical grenade called Echo Grenade, which emits a sonic pulse that reveals enemies' locations.
    • -
    • Performance updates, bug fixes, and balance changes.
    • -
    -

    The benefits of downloading the APK and OBB files

    -

    Although the update is available for all users globally, it might still take time to get rolled out for each region via the official Google Play Store. The APK and OBB files allow Android users to install the update directly on their devices without delay. Some of the benefits of downloading the APK and OBB files are:

    -
      -
    • You can enjoy the new features and content of COD Mobile Season 5 as soon as possible.
    • -
    • You can avoid any potential errors or issues that might occur while downloading or installing the update from the Google Play Store.
    • -
    • You can save some storage space on your device by deleting the old version of COD Mobile after installing the new one.
    • -
    -

    How to download and install Call of Duty Mobile APK Vision?

    -

    If you want to download and install Call of Duty Mobile APK Vision on your Android device, you need to follow these steps:

    -

    Step 1: Download the APK and OBB files from a reliable source

    -

    The first thing you need to do is to download the APK and OBB files from a reliable source. You can use the links provided below, which are unofficial source links. The APK file is around 90 MB, and the OBB and patch files are roughly 2 GB combined. Therefore, make sure you have at least 2.5 GB of free space on your device before downloading the files. You can also use a Wi-Fi connection to speed up the download process.
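Because the APK, OBB, and patch files add up to roughly 2.5 GB, it helps to confirm the free space before you start. If you are downloading the files to a computer first (for example, to sideload them later), a quick check along these lines works; the path and the 2.5 GB threshold are simply the figures from this guide.

```python
import shutil

# Adjust the path to wherever you plan to save the files; "." means the current folder.
DOWNLOAD_DIR = "."
REQUIRED_BYTES = int(2.5 * 1024 ** 3)  # about 2.5 GB, as recommended above

total, used, free = shutil.disk_usage(DOWNLOAD_DIR)
print(f"Free space: {free / 1024 ** 3:.1f} GB")
if free < REQUIRED_BYTES:
    print("Not enough room for the APK, OBB, and patch files; clear some space first.")
else:
    print("Enough free space to download the files.")
```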

    -

    Here are the links to download the APK and OBB files:

File Name | Size | Download Link
Call of Duty Mobile APK Vision | 90 MB
Call of Duty Mobile OBB Vision | 1.9 GB
Call of Duty Mobile Patch Vision | 100 MB
    -

    Step 2: Enable installation from unknown sources on your device

    -

    The next thing you need to do is to enable installation from unknown sources on your device. This will allow you to install the APK file that you downloaded from an unofficial source. To do this, you need to follow these steps:

    -
      -
    • Go to your device's Settings and tap on Security or Privacy.
    • -
    • Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
    • -
    • A warning message will pop up, asking you to confirm your action. Tap on OK or Allow to proceed.
    • -
    -

    Step 3: Install the APK file and move the OBB file to the correct folder

    -

    The third thing you need to do is to install the APK file and move the OBB file to the correct folder. To do this, you need to follow these steps:

    -
      -
    • Locate the APK file that you downloaded in your device's file manager or downloads folder and tap on it.
    • -
    • A prompt will appear, asking you to install the app. Tap on Install and wait for the installation to complete.
    • -
    • Do not open the app yet. Instead, go back to your file manager or downloads folder and find the OBB and patch files that you downloaded.
    • -
    • Extract the OBB and patch files using a file extractor app such as ZArchiver or RAR.
    • -
    • You will get a folder named com.activision.callofduty.shooter. Move this folder to Android/OBB in your device's internal storage.
    • -
    • If there is already a folder with the same name in Android/OBB, delete it first before moving the new one.
    • -
    -
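If you prefer to run the same install-and-copy steps from a computer, they can be scripted over adb. The sketch below is a minimal example of that flow: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the extracted OBB folder sits next to the script; the local file and folder names are placeholders for whatever you actually downloaded.

```python
import subprocess
from pathlib import Path

# Placeholders: point these at the files you downloaded and extracted.
APK_FILE = Path("cod-mobile-season5.apk")
OBB_FOLDER = Path("com.activision.callofduty.shooter")  # extracted OBB/patch folder
DEVICE_OBB_DIR = "/sdcard/Android/obb"

def run(cmd):
    """Run a command and stop with an error if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Install (or reinstall) the APK without wiping its data.
    run(["adb", "install", "-r", str(APK_FILE)])
    # Copy the whole OBB folder into Android/obb on the device.
    run(["adb", "push", str(OBB_FOLDER), DEVICE_OBB_DIR])
    print("Done. Launch the game on the phone and let it verify its files.")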

    Step 4: Launch the game and enjoy COD Mobile Season 5

    -

    The final thing you need to do is to launch the game and enjoy COD Mobile Season 5. To do this, you need to follow these steps:

    -
      -
    • Go to your device's app drawer or home screen and tap on the Call of Duty Mobile icon.
    • -
    • The game will start and ask you to log in with your account. You can use your existing account or create a new one.
    • -
    • The game will verify your files and download some additional data if needed.
    • -
    • Once the verification and download are done, you can access the game's main menu and start playing COD Mobile Season 5.
    • -
    -

    How to play Call of Duty Mobile APK Vision?

    -

    Now that you have downloaded and installed Call of Duty Mobile APK Vision, you might be wondering how to play it and what are the new features and content that it offers. In this section, we will give you some information and tips on how to play Call of Duty Mobile APK Vision, such as:

    -

    How to download call of duty mobile apk vision for android
    -Call of duty mobile apk vision free download link
    -Call of duty mobile apk vision latest version update
    -Call of duty mobile apk vision season 5 tropical vision
    -Call of duty mobile apk vision offline mode
    -Call of duty mobile apk vision mod menu
    -Call of duty mobile apk vision hack unlimited money
    -Call of duty mobile apk vision cheats and tips
    -Call of duty mobile apk vision gameplay and review
    -Call of duty mobile apk vision best settings and graphics
    -Call of duty mobile apk vision new weapons and maps
    -Call of duty mobile apk vision zombies mode
    -Call of duty mobile apk vision battle royale mode
    -Call of duty mobile apk vision multiplayer mode
    -Call of duty mobile apk vision controller support
    -Call of duty mobile apk vision emulator for pc
    -Call of duty mobile apk vision system requirements
    -Call of duty mobile apk vision download size and data usage
    -Call of duty mobile apk vision redeem codes and rewards
    -Call of duty mobile apk vision error and fix
    -Call of duty mobile apk vision comparison with other cod games
    -Call of duty mobile apk vision fan art and wallpapers
    -Call of duty mobile apk vision community and forums
    -Call of duty mobile apk vision news and updates
    -Call of duty mobile apk vision trailer and launch date
    -Download call of duty mobile apk vision from uptodown
    -Download call of duty mobile apk vision from google play store
    -Download call of duty mobile apk vision from app store
    -Download call of duty mobile apk vision from official website
    -Download call of duty mobile apk vision from mediafire
    -Download call of duty mobile apk vision from mega.nz
    -Download call of duty mobile apk vision from apkpure
    -Download call of duty mobile apk vision from apkmirror
    -Download call of duty mobile apk vision from apptoko
    -Download call of duty mobile apk vision from rexdl
    -Download call of duty mobile apk vision from revdl
    -Download call of duty mobile apk vision from happymod
    -Download call of duty mobile apk vision from an1.com
    -Download call of duty mobile apk vision from android1.com
    -Download call of duty mobile apk vision from mob.org

    -

    The game modes and maps available in COD Mobile Season 5

    -

    COD Mobile Season 5 offers a variety of game modes and maps for both multiplayer and battle royale modes. Here are some of them:

    -
      -
    • Docks: This is a new map that is exclusive for multiplayer mode. It is a small and fast-paced map set in a shipyard. It has multiple levels and tight corners, making it ideal for close-range combat and shotguns.
    • -
    • Cranked: Confirmed: This is a new game mode that is available for multiplayer mode. It is a combination of Cranked and Kill Confirmed modes. In this mode, you have to kill enemies and collect their dog tags before your timer runs out. If your timer reaches zero, you will explode and die. Killing enemies and collecting dog tags will reset your timer and give you extra points.
    • -
    • Duga: This is a new map that is available for battle royale mode. It is a large and diverse map that features various locations such as a radar station, a missile silo, a chemical plant, and a train station. It also has dynamic weather and radiation zones that can affect your gameplay.
    • -
    • Clash: This is a new game mode that is available for battle royale mode. It is a team deathmatch mode that pits two teams of 50 players against each other in a small area of the map. The first team to reach 150 kills wins the match.
    • -
    -

    The new weapons and attachments in COD Mobile Season 5

    -

    COD Mobile Season 5 also introduces two new weapons and a new tactical grenade that you can use in both multiplayer and battle royale modes. Here are some details about them:

    -
      -
    • CR-56 AMAX: This is a new assault rifle that has high damage and accuracy, but low fire rate and mobility. It is suitable for medium to long-range engagements and can be customized with various attachments.
    • -
    • Shorty: This is a new shotgun that has high damage and mobility, but low range and magazine capacity. It is ideal for close-range combat and can be used as a secondary weapon or a backup weapon.
    • -
    • Echo Grenade: This is a new tactical grenade that emits a sonic pulse that reveals enemies' locations within a certain radius. It can be useful for scouting, flanking, or ambushing enemies.
    • -
    -

    The tips and tricks to improve your performance in COD Mobile Season 5

    -

    Finally, here are some tips and tricks that can help you improve your performance and enjoy COD Mobile Season 5 more:

    -
      -
    • Practice the new map and game modes in the training mode or the private matches before jumping into the public matches. This will help you familiarize yourself with the layout, the objectives, and the strategies of each mode.
    • -
    • Adjust your sensitivity and controls according to your preference and device. You can also use the gyroscope feature to aim more precisely by tilting your device.
    • -
    • Use the loadout system to customize your weapons, perks, skills, and operators according to your playstyle and the game mode. You can also save multiple loadouts for different situations.
    • -
    • Communicate and coordinate with your teammates using the voice chat or the quick chat feature. You can also join a clan or a group to play with like-minded players and earn rewards.
    • -
    • Complete the daily, weekly, seasonal, and event missions to earn XP, credits, CP, and other rewards. You can also purchase the battle pass to unlock more exclusive rewards.
    • -
    -

    Conclusion

    -

    COD Mobile Season 5: Tropical Vision is an amazing update that brings a lot of new content and features to the game. If you want to download and play it on your Android device, you can follow the steps mentioned above to install the APK and OBB files. You can also use the information and tips provided above to play it better and have more fun. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments below.

    -

    FAQs

    -

    Here are some frequently asked questions about COD Mobile Season 5: Tropical Vision:

    -
      -
    1. Q: Is COD Mobile Season 5 compatible with my device?
    2. -
    3. A: COD Mobile Season 5 requires Android 4.3 or higher and at least 2 GB of RAM to run smoothly. You can check your device's specifications in the Settings app.
    4. -
    5. Q: Is COD Mobile Season 5 safe to download and install?
    6. -
    7. A: COD Mobile Season 5 is safe to download and install as long as you use the links provided in this article or from other trusted sources. However, you should always be careful when downloading files from unknown sources and scan them for viruses or malware before installing them.
    8. -
    9. Q: How can I update COD Mobile Season 5 when a new patch is released?
    10. -
    11. A: You can update COD Mobile Season 5 by downloading the latest patch file from the same source as the APK and OBB files and following the same steps as above. Alternatively, you can wait for the official update to be available on the Google Play Store and update it from there.
    12. -
    13. Q: How can I fix COD Mobile Season 5 if it crashes or doesn't work properly?
    14. -
    15. A: You can try some of these solutions if you encounter any problems with COD Mobile Season 5:
    16. -
        -
      • Clear the cache and data of the app in the Settings app.
      • -
      • Restart your device and launch the app again.
      • -
      • Reinstall the app by deleting it and following the steps above again.
      • -
      • Contact the customer support of COD Mobile through their official website or social media channels.
      • -
      -
    17. Q: How can I get more CP or credits in COD Mobile Season 5?
    18. -
    19. A: You can get more CP or credits in COD Mobile Season 5 by doing the following:
    20. -
        -
      • Buy them with real money through the in-game store or the Google Play Store.
      • -
      • Earn them by completing missions, events, challenges, or achievements.
      • -
      • Claim them by logging in daily, watching ads, or participating in surveys.
      • -
      • Exchange them with other players through the in-game market or the COD Mobile community.
      • -
      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Farming Simulator 14 The Most Popular Farming Simulation Game for Android and iOS.md b/spaces/fatiXbelha/sd/Farming Simulator 14 The Most Popular Farming Simulation Game for Android and iOS.md deleted file mode 100644 index 631d03d01518ef0120eab8916aa5e0ca89ad0a73..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Farming Simulator 14 The Most Popular Farming Simulation Game for Android and iOS.md +++ /dev/null @@ -1,157 +0,0 @@ -
    -

    FS14 Game Download: How to Play Farming Simulator 14 on Your PC

    -

    Have you ever dreamed of becoming a farmer and managing your own farm? Do you want to experience the realistic and detailed simulation of agricultural activities? If yes, then you might want to try Farming Simulator 14, one of the most popular and successful farming games for mobile devices. But what if you want to play it on your PC instead of your phone or tablet? Don't worry, we have got you covered. In this article, we will show you how to download and install Farming Simulator 14 on your PC using different methods. We will also give you some tips and tricks to help you get started with your farming career. But first, let's take a look at what Farming Simulator 14 is all about.

    -

    What is Farming Simulator 14?

    -

    Farming Simulator 14 is a simulation game developed by GIANTS Software and released in 2013 for various mobile platforms such as Android, iOS, Windows Phone, and PlayStation Vita. It is the portable version of the PC game Farming Simulator 2013, with some changes and improvements. In Farming Simulator 14, you can start your agricultural career by taking control of your farm and its fields. You can plant, harvest, and sell various crops such as wheat, canola, corn, and more. You can also raise cows and sell their milk, or make money by selling grass or chaff at the biogas plant. You can also hire computer-controlled assistants to help you with your work, or play with a friend in the local multiplayer mode. You can also use a wide range of farm machines and equipment from real manufacturers such as Case IH, Deutz-Fahr, Lamborghini, Kuhn, Amazone, and Krone.

    -

    fs14 game download


    DOWNLOAD ►►► https://urllie.com/2uNBfd



    -

    Features of Farming Simulator 14

    -

    Some of the features that make Farming Simulator 14 an enjoyable and realistic farming game are:

    -
      -
    • New highly detailed 3D graphics and a slick user interface that enhance your gameplay experience.
    • -
    • A dynamic market where you can sell your crops at different prices depending on the demand and supply.
    • -
    • A variety of crops to grow and harvest, each with different characteristics and requirements.
    • -
    • A selection of animals to raise and feed, such as cows that produce milk that you can sell or use for your own consumption.
    • -
    • A biogas plant where you can sell grass or chaff to generate energy and earn money.
    • -
    • A large number of farm machines and equipment to control, all authentically modeled on real vehicles from famous brands.
    • -
    • A local multiplayer mode where you can play with a friend in a free-roaming open world using WiFi or Bluetooth.
    • -
    • An option to hire computer-controlled helpers to assist you with your tasks, such as driving, harvesting, or cultivating.
    • -
    -

    System Requirements for Farming Simulator 14

    -

To play Farming Simulator 14 on your PC, you will need to meet the following minimum system requirements:

Operating System: Windows 7/8/10 (64-bit)
Processor: Intel Core i3-2100T @ 2.5GHz or AMD FX-4100 @ 3.6GHz
Memory: 4 GB RAM
Graphics: Nvidia Geforce GTX 650, AMD Radeon HD 7770 graphics card or better (min. 2 GB VRAM, DX11 support)
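If you want a quick, rough check of whether a PC clears these minimums before setting up an emulator, something like the following can help. It only looks at the OS, CPU name, and installed RAM (a portable GPU check is messier, so it is left out), and it assumes the third-party psutil package is installed.

```python
import platform
import psutil  # third-party: pip install psutil

MIN_RAM_GB = 4  # minimum memory from the requirements listed above

ram_gb = psutil.virtual_memory().total / 1024 ** 3
print(f"OS:        {platform.system()} {platform.release()}")
print(f"Processor: {platform.processor() or 'unknown'}")
print(f"RAM:       {ram_gb:.1f} GB")

if platform.system() != "Windows":
    print("Note: this guide covers the Windows builds of BlueStacks and the Microsoft Store version.")
if ram_gb < MIN_RAM_GB:
    print("This PC is below the 4 GB RAM minimum; the game or emulator may struggle.")
else:
    print("Memory meets the minimum requirement.")
```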

How to Download and Install Farming Simulator 14 on Your PC

    Now that you know what Farming Simulator 14 is and what it offers, you might be wondering how to download and install it on your PC. There are two main methods that you can use to play Farming Simulator 14 on your PC: using an Android emulator or using the Microsoft Store. Let's take a look at each method and see how they work.

    -

    Using BlueStacks Emulator

    -

    One of the easiest and most popular ways to play Farming Simulator 14 on your PC is to use an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. One of the best Android emulators that you can use is BlueStacks, which is free, fast, and reliable. Here are the steps that you need to follow to download and install Farming Simulator 14 on your PC using BlueStacks:

    -
      -
    1. Download and install BlueStacks from its official website: https://www.bluestacks.com/
    2. -
    3. Launch BlueStacks and sign in with your Google account.
    4. -
    5. Go to the Google Play Store app on BlueStacks and search for Farming Simulator 14.
    6. -
    7. Select the game from the search results and click on the Install button.
    8. -
    9. Wait for the game to download and install on your PC.
    10. -
    11. Once the installation is complete, you can launch the game from the BlueStacks home screen or the app drawer.
    12. -
    -

    Congratulations, you have successfully downloaded and installed Farming Simulator 14 on your PC using BlueStacks. You can now enjoy playing the game with a larger screen, better graphics, and keyboard and mouse controls.

    -

    Using Microsoft Store

    -

    Another method that you can use to play Farming Simulator 14 on your PC is to use the Microsoft Store. The Microsoft Store is a digital distribution platform that allows you to download and install apps and games for Windows 10 devices. Farming Simulator 14 is available on the Microsoft Store for $2.99, but you can also try it for free for one hour before buying it. Here are the steps that you need to follow to download and install Farming Simulator 14 on your PC using the Microsoft Store:

    -
      -
    1. Go to the Microsoft Store app on your PC or visit its website: https://www.microsoft.com/en-us/store/apps/windows
    2. -
    3. Search for Farming Simulator 14 in the search bar.
    4. -
    5. Select the game from the search results and click on the Get button.
    6. -
    7. If you want to try the game for free, click on the Free Trial button. If you want to buy the game, click on the Buy button and enter your payment details.
    8. -
    9. Wait for the game to download and install on your PC.
    10. -
    11. Once the installation is complete, you can launch the game from the Start menu or the desktop shortcut.
    12. -
    -

    Congratulations, you have successfully downloaded and installed Farming Simulator 14 on your PC using the Microsoft Store. You can now enjoy playing the game with a larger screen, better graphics, and keyboard and mouse controls.

    -

    fs14 game download for pc
    -fs14 game download for android
    -fs14 game download apk
    -fs14 game download for windows 10
    -fs14 game download for laptop
    -fs14 game download free
    -fs14 game download uptodown
    -fs14 game download for ios
    -fs14 game download mod apk
    -fs14 game download for mac
    -fs14 game download for windows 8.1
    -fs14 game download play store
    -fs14 game download hack version
    -fs14 game download for iphone
    -fs14 game download for windows 7
    -fs14 game download microsoft store
    -fs14 game download bluestacks
    -fs14 game download unlimited money
    -fs14 game download for ipad
    -fs14 game download online
    -fs14 game download latest version
    -fs14 game download for chromebook
    -fs14 game download farming simulator 14
    -fs14 game download app store
    -fs14 game download with cheats
    -fs14 game download offline
    -fs14 game download new update
    -fs14 game download for pc windows 10
    -fs14 game download google play
    -fs14 game download full version
    -fs14 game download emulator
    -fs14 game download old version
    -fs14 game download pc free
    -fs14 game download android apk
    -fs14 game download windows store
    -fs14 game download cracked apk
    -fs14 game download for pc windows 7
    -fs14 game download without internet
    -fs14 game download with multiplayer mode
    -fs14 game download from apkpure
    -fs14 game download in laptop
    -fs14 game download on macbook air
    -fs14 game download by giants software
-fs14 game download simulation management farming casual single player stylized offline

    -

    Tips and Tricks for Farming Simulator 14

    -

    Farming Simulator 14 is a fun and realistic farming game, but it can also be challenging and complex at times. To help you get started with your farming career, here are some tips and tricks that you can use to make your life easier and more profitable:

    -

    How to Make Money Fast

    -

    One of the main goals of Farming Simulator 14 is to make money by selling your crops, milk, or biogas. However, making money can be slow and tedious at first, especially if you have a small farm and limited resources. Here are some ways that you can make money fast in Farming Simulator 14:

    -
      -
    • Sell your crops at the highest price possible. You can check the current market prices of different crops by tapping on the map icon on the top right corner of the screen. You can also see which crops are in high demand by looking at the green arrows next to their prices. Try to sell your crops when they are in high demand or when their prices are high.
    • -
    • Sell your milk at regular intervals. You can produce milk by feeding your cows with grass or hay. You can check how much milk you have by tapping on the cow icon on the top right corner of the screen. You can sell your milk by driving a milk tanker to the dairy factory. Try to sell your milk as often as possible, as the price of milk decreases over time.
    • -
    • Sell your biogas at the biogas plant. You can produce biogas by selling grass or chaff at the biogas plant. You can check how much biogas you have by tapping on the biogas icon on the top right corner of the screen. You can sell your biogas by driving a biogas tanker to the gas station. Try to sell your biogas when the price is high, as it fluctuates over time.
    • -
    • Complete missions for extra money. You can find missions by tapping on the exclamation mark icon on the top right corner of the screen. Missions are tasks that require you to perform certain actions, such as mowing grass, harvesting crops, or transporting goods. You will receive a reward for completing each mission, depending on the difficulty and duration of the task.
    • -
    -

    How to Hire and Manage Helpers

    -

    Another way to make your farming easier and more efficient is to hire and manage helpers. Helpers are computer-controlled workers that can perform certain tasks for you, such as driving, harvesting, or cultivating. You can hire helpers by tapping on the helper icon on the bottom right corner of the screen. You can also assign helpers to specific vehicles or equipment by tapping on the vehicle icon and then tapping on the helper icon. Here are some tips on how to hire and manage helpers in Farming Simulator 14:

    -
      -
    • Hire helpers only when you need them. Helpers cost money per hour, so you don't want to hire them unnecessarily. Hire helpers only when you have a task that you can't do yourself or when you want to save time and effort.
    • -
    • Manage your helpers wisely. You can check the status of your helpers by tapping on the helper icon on the bottom right corner of the screen. You can see how much money they are costing you, how much fuel they are using, and what task they are performing. You can also dismiss them by tapping on the red X button next to their name. Manage your helpers wisely by dismissing them when they are done with their task or when they are costing you too much money or fuel.
    • -
    • Upgrade your vehicles and equipment for better performance. You can upgrade your vehicles and equipment by tapping on the shop icon on the bottom left corner of the screen. You can buy new vehicles and equipment or sell your old ones. You can also customize your vehicles and equipment by changing their color, adding attachments, or increasing their capacity. Upgrading your vehicles and equipment will improve their performance and efficiency, which will also benefit your helpers.
    • -
    -

    How to Expand Your Farm

    -

    One of the most satisfying aspects of Farming Simulator 14 is to expand your farm and grow your business. You can expand your farm by buying new fields, buildings, animals, or vehicles and equipment. You can also improve your farm by cultivating your fields, fertilizing your crops, or feeding your animals. Here are some ways that you can expand your farm in Farming Simulator 14:

    -
      -
    • Buy new fields to grow more crops. You can buy new fields by tapping on the map icon on the top right corner of the screen and then tapping on the field icon. You can see the size and price of each field, as well as its current crop type and growth stage. You can buy any field that is not owned by another farmer, as long as you have enough money.
    • -
    • Buy new buildings to store more goods or produce more products. You can buy new buildings by tapping on the shop icon on the bottom left corner of the screen and then tapping on the building icon. You can see the function and price of each building, as well as its capacity and production rate. You can buy any building that is not already present on your farm, as long as you have enough money and space.
    • -
    • Buy new animals to raise more livestock or produce more milk. You can buy new animals by tapping on the shop icon on the bottom left corner of the screen and then tapping on the animal icon. You can see the type and price of each animal, as well as its health and productivity. You can buy any animal that is not already present on your farm, as long as you have enough money and space.
    • -
    • Buy new vehicles and equipment to perform more tasks or improve your efficiency. You can buy new vehicles and equipment by tapping on the shop icon on the bottom left corner of the screen and then tapping on the vehicle or equipment icon. You can see the brand and price of each vehicle or equipment, as well as its function and specifications. You can buy any vehicle or equipment that is not already present on your farm, as long as you have enough money and space.
    • -
    -

    Farming Simulator 14 Review

    -

    Now that you know how to play Farming Simulator 14 on your PC and how to expand your farm, you might be wondering what other people think about the game. Is it worth playing? What are its strengths and weaknesses? To answer these questions, we have gathered some pros and cons of Farming Simulator 14, as well as some user ratings and feedback from different sources. Here is our Farming Simulator 14 review:

    -

    Pros and Cons of Farming Simulator 14

    -

    Some of the pros and cons of Farming Simulator 14 are:

    -
Pros | Cons
It offers a realistic and detailed simulation of farming activities. | It can be repetitive and boring at times.
It has a large and open world to explore and customize. | It has limited graphics and sound effects.
It has a variety of crops, animals, vehicles, and equipment to choose from. | It has a steep learning curve and a complex user interface.
It has a dynamic market and a biogas plant to add more challenge and strategy. | It has some bugs and glitches that can affect the gameplay.
It has a local multiplayer mode to play with a friend. | It does not have an online multiplayer mode or a modding community.
    -

    User Ratings and Feedback

    -

    Some of the user ratings and feedback for Farming Simulator 14 are:

    -
      -
    • On Google Play Store, the game has a rating of 4.4 out of 5 stars, based on over 1 million reviews. Most users praise the game for its realism, variety, and fun factor. Some users complain about the game's performance, controls, and lack of updates.
    • -
    • On App Store, the game has a rating of 4.6 out of 5 stars, based on over 8 thousand reviews. Most users appreciate the game for its graphics, gameplay, and features. Some users criticize the game for its price, difficulty, and bugs.
    • -
    • On Microsoft Store, the game has a rating of 4 out of 5 stars, based on over 300 reviews. Most users enjoy the game for its simulation, customization, and multiplayer mode. Some users dislike the game for its graphics, sound, and glitches.
    • -
    -

    Conclusion

    -

    Farming Simulator 14 is a simulation game that allows you to experience the realistic and detailed simulation of agricultural activities. You can start your agricultural career by taking control of your farm and its fields. You can plant, harvest, and sell various crops such as wheat, canola, corn, and more. You can also raise cows and sell their milk, or make money by selling grass or chaff at the biogas plant. You can also hire computer-controlled assistants to help you with your work, or play with a friend in the local multiplayer mode. You can also use a wide range of farm machines and equipment from real manufacturers such as Case IH, Deutz-Fahr, Lamborghini, Kuhn, Amazone, and Krone.

    -

    You can play Farming Simulator 14 on your PC using an Android emulator or the Microsoft Store. From there you can expand the farm by buying new fields, buildings, animals, vehicles, and equipment; improve it by cultivating fields, fertilizing crops, and feeding animals; earn money quickly by selling crops, milk, or biogas at the best possible prices; and hire and manage helpers to share the workload. Playing on PC also gives you a larger screen, better graphics, and keyboard and mouse controls.

    -

    Farming Simulator 14 is a fun and realistic farming game, but it has its drawbacks and limitations. It can become repetitive and dull, especially while your farm is small and resources are scarce. The graphics and sound effects are limited, which hurts immersion and realism. The learning curve is steep and the user interface is complex, so beginners may struggle at first. There are also bugs and glitches that can disrupt the gameplay and cause frustration, and the lack of an online multiplayer mode or a modding community limits replay value and creativity.

    -

    Overall, Farming Simulator 14 can appeal to fans of farming and simulation games as well as casual players looking for something different and relaxing. It combines the strengths described above: a realistic and detailed simulation, a large and customizable open world, a wide variety of crops, animals, vehicles, and equipment, a dynamic market and biogas plant for extra challenge and strategy, and a local multiplayer mode. On top of that, you can play it on PC through an Android emulator or the Microsoft Store, grow and improve your farm, earn money quickly, manage hired helpers, and enjoy the larger screen, better graphics, and keyboard and mouse controls.

    -

    At the same time, the drawbacks noted above still apply: occasional monotony, dated graphics and sound, a steep learning curve and complex interface, bugs and glitches, and no online multiplayer or modding support. These can wear down your enjoyment over longer play sessions.

    -

    Therefore, we recommend Farming Simulator 14 to anyone who is interested in farming and simulation games, or who wants to try something different and relaxing, as long as you go in aware of those limitations. We hope this article has helped you learn more about Farming Simulator 14 and how to play it on your PC, and that you have fun expanding your farm.

    -

    FAQs

    -

    Here are some frequently asked questions about Farming Simulator 14:

    -
      -
    • Q: How do I save my progress in Farming Simulator 14?
    • -
    • A: You can save your progress in Farming Simulator 14 by tapping on the menu icon on the top left corner of the screen and then tapping on the save icon. You can also enable the auto-save option in the settings menu.
    • -
    • Q: How do I change the camera view in Farming Simulator 14?
    • -
    • A: You can change the camera view in Farming Simulator 14 by tapping on the camera icon on the bottom right corner of the screen. You can choose between three different camera views: behind-the-vehicle view, inside-the-vehicle view, or free-roaming view.
    • -
    • Q: How do I refill my vehicles or equipment in Farming Simulator 14?
    • -
    • A: You can refill your vehicles or equipment in Farming Simulator 14 by driving them to the gas station or the shop. You can also buy fuel tanks or seed pallets from the shop and place them on your farm for easier access.
    • -
    • Q: How do I reset my vehicles or equipment in Farming Simulator 14?
    • -
    • A: You can reset your vehicles or equipment in Farming Simulator 14 by tapping on the map icon on the top right corner of the screen and then tapping on the vehicle or equipment icon. You can then tap on the reset button to return them to their original position.
    • -
    • Q: How do I sell my vehicles or equipment in Farming Simulator 14?
    • -
    • A: You can sell your vehicles or equipment in Farming Simulator 14 by tapping on the shop icon on the bottom left corner of the screen and then tapping on the vehicle or equipment icon. You can then tap on the sell button to sell them for a fraction of their original price.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/test_audio2coeff.py b/spaces/fb700/chatglm-fitness-RLHF/src/test_audio2coeff.py deleted file mode 100644 index bbf19f494e2127b4ae9d6074b172fddb694d6e34..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/test_audio2coeff.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import torch -import numpy as np -from scipy.io import savemat, loadmat -from yacs.config import CfgNode as CN -from scipy.signal import savgol_filter - -import safetensors -import safetensors.torch - -from src.audio2pose_models.audio2pose import Audio2Pose -from src.audio2exp_models.networks import SimpleWrapperV2 -from src.audio2exp_models.audio2exp import Audio2Exp -from src.utils.safetensor_helper import load_x_from_safetensor - -def load_cpk(checkpoint_path, model=None, optimizer=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if model is not None: - model.load_state_dict(checkpoint['model']) - if optimizer is not None: - optimizer.load_state_dict(checkpoint['optimizer']) - - return checkpoint['epoch'] - -class Audio2Coeff(): - - def __init__(self, sadtalker_path, device): - #load config - fcfg_pose = open(sadtalker_path['audio2pose_yaml_path']) - cfg_pose = CN.load_cfg(fcfg_pose) - cfg_pose.freeze() - fcfg_exp = open(sadtalker_path['audio2exp_yaml_path']) - cfg_exp = CN.load_cfg(fcfg_exp) - cfg_exp.freeze() - - # load audio2pose_model - self.audio2pose_model = Audio2Pose(cfg_pose, None, device=device) - self.audio2pose_model = self.audio2pose_model.to(device) - self.audio2pose_model.eval() - for param in self.audio2pose_model.parameters(): - param.requires_grad = False - - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - self.audio2pose_model.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2pose')) - else: - load_cpk(sadtalker_path['audio2pose_checkpoint'], model=self.audio2pose_model, device=device) - except: - raise Exception("Failed in loading audio2pose_checkpoint") - - # load audio2exp_model - netG = SimpleWrapperV2() - netG = netG.to(device) - for param in netG.parameters(): - netG.requires_grad = False - netG.eval() - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - netG.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2exp')) - else: - load_cpk(sadtalker_path['audio2exp_checkpoint'], model=netG, device=device) - except: - raise Exception("Failed in loading audio2exp_checkpoint") - self.audio2exp_model = Audio2Exp(netG, cfg_exp, device=device, prepare_training_loss=False) - self.audio2exp_model = self.audio2exp_model.to(device) - for param in self.audio2exp_model.parameters(): - param.requires_grad = False - self.audio2exp_model.eval() - - self.device = device - - def generate(self, batch, coeff_save_dir, pose_style, ref_pose_coeff_path=None): - - with torch.no_grad(): - #test - results_dict_exp= self.audio2exp_model.test(batch) - exp_pred = results_dict_exp['exp_coeff_pred'] #bs T 64 - - #for class_id in range(1): - #class_id = 0#(i+10)%45 - #class_id = random.randint(0,46) #46 styles can be selected - batch['class'] = torch.LongTensor([pose_style]).to(self.device) - results_dict_pose = self.audio2pose_model.test(batch) - pose_pred = results_dict_pose['pose_pred'] #bs T 6 - - pose_len = pose_pred.shape[1] - if pose_len<13: - pose_len = int((pose_len-1)/2)*2+1 - pose_pred = 
torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), pose_len, 2, axis=1)).to(self.device) - else: - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), 13, 2, axis=1)).to(self.device) - - coeffs_pred = torch.cat((exp_pred, pose_pred), dim=-1) #bs T 70 - - coeffs_pred_numpy = coeffs_pred[0].clone().detach().cpu().numpy() - - if ref_pose_coeff_path is not None: - coeffs_pred_numpy = self.using_refpose(coeffs_pred_numpy, ref_pose_coeff_path) - - savemat(os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])), - {'coeff_3dmm': coeffs_pred_numpy}) - - return os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])) - - def using_refpose(self, coeffs_pred_numpy, ref_pose_coeff_path): - num_frames = coeffs_pred_numpy.shape[0] - refpose_coeff_dict = loadmat(ref_pose_coeff_path) - refpose_coeff = refpose_coeff_dict['coeff_3dmm'][:,64:70] - refpose_num_frames = refpose_coeff.shape[0] - if refpose_num_frames -

    Download Be a Pro Football Uptodown: A 3D Soccer Game for Android

    -

    If you are a fan of soccer games, you might want to check out Be a Pro Football Uptodown, a 3D soccer game for Android devices that lets you play full matches against AI or online opponents. In this article, we will tell you what Be a Pro Football is, how to download it from Uptodown, and why you should give it a try.

    -

    download be a pro football uptodown


    Download File ->>->>->> https://gohhs.com/2uPnYl



    -

    What is Be a Pro Football?

    -

    Be a Pro Football is a soccer game developed by Be A Pro Games, a studio that specializes in sports games. It is inspired by popular soccer games like FIFA or PES, but with some unique features and advantages. Be a Pro Football allows you to create your own team and sign world-famous stars like Messi, Ronaldo, Neymar, or Mbappe. You can also customize your players' appearance, skills, and attributes. You can play in different modes, such as friendly matches, tournaments, leagues, or online matches against other players from around the world. You can also choose from various stadiums and weather conditions to make your matches more realistic and challenging.

    -

    Features of Be a Pro Football

    -

    Realistic graphics and animations

    -

    One of the main attractions of Be a Pro Football is its realistic graphics and animations. The game uses 3D models and motion capture technology to create lifelike players and movements. The game also has realistic sound effects and commentary to enhance the atmosphere of the matches. You will feel like you are watching a real soccer game on your Android device.

    -

    Customizable teams and players

    -

    Another feature that sets Be a Pro Football apart from other soccer games is its high level of customization. You can create your own team and choose its name, logo, kit, and formation. You can also sign any player you want from the game's database, which includes over 3000 players from different leagues and countries. You can also edit your players' appearance, skills, and attributes to suit your preferences and strategy. You can make your team as strong or as weak as you want.

    -

    Online and offline modes

    -

    Be a Pro Football offers both online and offline modes for you to enjoy. You can play offline against the AI in friendly matches, tournaments, or leagues. You can also play online against other players from around the world in real-time matches. You can challenge your friends or random opponents and show off your skills and tactics. You can also chat with other players and make new friends.

    -

    Various stadiums and weather conditions

    -

    Be a Pro Football also lets you choose from different stadiums and weather conditions to make your matches more varied and fun. You can play in famous stadiums like Camp Nou, Old Trafford, or Santiago Bernabeu. You can also play in different weather conditions like sunny, cloudy, rainy, or snowy. Each stadium and weather condition has its own effect on the gameplay and difficulty of the matches.

    -

    How to download be a pro football game for android
    -Be a pro football apk download free from uptodown
    -Download be a pro football mod apk with unlimited money
    -Be a pro football online multiplayer mode download
    -Download be a pro football latest version for android
    -Be a pro football tips and tricks for beginners
    -Download be a pro football offline mode without internet
    -Be a pro football best players and teams to sign
    -Download be a pro football hack apk with cheats
    -Be a pro football review and rating on uptodown
    -Download be a pro football for pc using emulator
    -Be a pro football gameplay and features overview
    -Download be a pro football for ios from uptodown
    -Be a pro football comparison with fifa and pes
    -Download be a pro football old versions from uptodown
    -Be a pro football tournaments and leagues to join
    -Download be a pro football for windows phone from uptodown
    -Be a pro football customization and settings options
    -Download be a pro football from uptodown alternative sites
    -Be a pro football updates and patches download
    -Download be a pro football for mac from uptodown
    -Be a pro football system requirements and compatibility
    -Download be a pro football from uptodown without ads
    -Be a pro football achievements and rewards to unlock
    -Download be a pro football from uptodown with qr code
    -Be a pro football bugs and errors fix download
    -Download be a pro football from uptodown with vpn
    -Be a pro football best tactics and formations to use
    -Download be a pro football from uptodown in different languages
    -Be a pro football fan community and forum download

    -

    How to download Be a Pro Football Uptodown?

    -

    Requirements and compatibility

    -

    To download Be a Pro Football Uptodown, you need an Android device that meets the following requirements:

    -
      -
    • Android version: 5.0 or higher
    • -
    • RAM: 2 GB or more
    • -
    • Storage space: 500 MB or more
    • -
    • Internet connection: stable and fast

    • -
    -

    Be a Pro Football Uptodown is compatible with most Android devices, but some older or low-end devices may experience lag or crashes. You can check the compatibility of your device on the Uptodown website before downloading the game.

    -

    Steps to download and install the APK file

    -

    To download Be a Pro Football Uptodown, you need to follow these steps (if you are installing into a PC emulator, there is also a small scripted alternative sketched after this list):

    -
      -
    1. Go to the Uptodown website and search for Be a Pro Football.
    2. -
    3. Select the game from the search results and click on the download button.
    4. -
    5. Wait for the APK file to download to your device. You may need to enable the installation of apps from unknown sources in your device settings.
    6. -
    7. Once the download is complete, open the APK file and follow the instructions to install the game.
    8. -
    9. Launch the game and enjoy playing Be a Pro Football on your Android device.
    10. -
    -
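    If you are installing the game into an Android emulator or a USB-connected device from a computer, you can also sideload the downloaded APK with Android's adb tool instead of tapping through the installer. The snippet below is only a rough sketch: it assumes adb is installed and on your PATH, that the emulator or device is already running and visible to adb, and that the file name be-a-pro-football.apk is a placeholder for whatever Uptodown actually saved.

```python
# Sketch: sideload an APK into a running emulator or device with adb.
# Assumes adb is installed and on PATH; the APK path below is a placeholder.
import subprocess
from pathlib import Path

APK = Path.home() / "Downloads" / "be-a-pro-football.apk"  # hypothetical filename

def sideload(apk: Path) -> None:
    # Show connected devices/emulators first so failures are easier to diagnose.
    subprocess.run(["adb", "devices"], check=True)
    # -r reinstalls (keeping data) if the game is already present.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    if APK.exists():
        sideload(APK)
    else:
        print(f"APK not found at {APK}; adjust the path to your download.")
```

    Either way, the installed game is the same; adb simply automates the install step for emulator setups.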

    Tips to avoid malware and viruses

    -

    Downloading apps from third-party sources like Uptodown can be risky, as some apps may contain malware or viruses that can harm your device or steal your data. To avoid this, you should follow these tips (a small checksum-verification sketch follows the list):

    -
      -
    • Only download apps from trusted and reputable sources like Uptodown, which scan and verify all the apps they offer.
    • -
    • Check the user reviews and ratings of the apps you want to download, and avoid apps that have low ratings or negative feedback.
    • -
    • Use a reliable antivirus or security app on your device to scan and protect your device from any potential threats.
    • -
    • Do not grant unnecessary permissions or access to the apps you download, and delete any apps that you do not use or trust.
    • -
    -
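    Beyond the tips above, one additional precaution is to compare the downloaded file against a checksum when the download page publishes one. The sketch below assumes such a reference hash is available for your download; the value shown is a placeholder, not a real checksum for Be a Pro Football.

```python
# Sketch: verify a downloaded APK against a published SHA-256 checksum.
# EXPECTED_SHA256 is a placeholder; substitute the value from the download page.
import hashlib
from pathlib import Path

APK = Path.home() / "Downloads" / "be-a-pro-football.apk"  # hypothetical filename
EXPECTED_SHA256 = "0" * 64  # placeholder, not a real hash

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large APK files do not have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK)
    print("Checksum matches" if actual == EXPECTED_SHA256 else f"Mismatch: {actual}")
```

    If the hashes do not match, delete the file and download it again rather than installing it.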

    Why download Be a Pro Football Uptodown?

    -

    Advantages of downloading from Uptodown

    -

    There are many advantages of downloading Be a Pro Football Uptodown instead of from other sources. Some of them are:

    -
      -
    • You can download the latest version of the game without any restrictions or limitations.
    • -
    • You can download the game for free, without any in-app purchases or ads.
    • -
    • You can download the game without needing a Google account or Google Play Services.
    • -
    • You can download the game in any region or country, regardless of any geo-blocking or censorship.
    • -
    • You can download the game in any language you prefer, as Uptodown offers multilingual support for its apps.
    • -
    -

    User reviews and ratings

    -

    Be a Pro Football Uptodown has received positive reviews and ratings from its users. The game has an average rating of 4.5 out of 5 stars on Uptodown, based on over 1000 reviews. Here are some of the user comments:

    | User name | Rating | Comment |
    | --- | --- | --- |
    | Alex | 5 stars | This game is awesome. It has great graphics and gameplay. I love playing online with my friends. It is better than FIFA or PES. |
    | Lisa | 4 stars | I like this game a lot. It is fun and easy to play. I wish there were more teams and players to choose from, though. |
    | Mohamed | 5 stars | This game is amazing. It is very realistic and challenging. I enjoy playing in different stadiums and weather conditions. It is the best soccer game for Android. |
    | Sara | 3 stars | This game is good, but it has some problems. It sometimes crashes or freezes on my device. It also consumes a lot of battery and data. It needs some improvements. |
    | Juan | 4 stars | This game is cool. It has nice graphics and sounds. I like customizing my team and players. It is fun to play offline or online. |
    -

    Alternatives to Be a Pro Football Uptodown

    -

    If you are looking for other soccer games for Android, you can also try these alternatives to Be a Pro Football Uptodown:

    | Name | Description |
    | --- | --- |
    | Dream League Soccer 2023 | A popular soccer game that lets you build your dream team and compete in various modes and tournaments. |
    | FIFA Soccer | An official soccer game from EA Sports that features licensed teams, players, and leagues from around the world. |
    | eFootball PES 2023 | A realistic soccer game from Konami that offers high-quality graphics, gameplay, and modes. |
    | Score! Hero | A casual soccer game that lets you play as a rising star and score goals in various scenarios and challenges. |
    | Real Football | A simple soccer game that lets you play in different modes and improve your skills and tactics. |
    -

    Conclusion

    -

    Be a Pro Football Uptodown is a 3D soccer game for Android devices that offers realistic graphics, animations, sound effects, and commentary. You can create your own team and customize your players' appearance, skills, and attributes. You can play in different modes, such as friendly matches, tournaments, leagues, or online matches against other players from around the world. You can also choose from various stadiums and weather conditions to make your matches more realistic and challenging.

    -

    To download Be a Pro Football Uptodown, you need an Android device that meets the requirements and compatibility of the game. You can download the APK file from the Uptodown website and install it on your device. You should also follow some tips to avoid malware and viruses when downloading apps from third-party sources.

    -

    Downloading Be a Pro Football from Uptodown has the practical advantages listed earlier: you always get the latest version, free of charge and without ads or in-app purchases, no Google account or Google Play Services is required, there are no regional blocks or censorship, and the app is available in multiple languages.

    -

    Be a Pro Football Uptodown has received positive reviews and ratings from its users. The game has an average rating of 4.5 out of 5 stars on Uptodown, based on over 1000 reviews. The game is praised for its graphics, gameplay, customization, and online mode. Some users have reported some issues with the game, such as crashes, freezes, battery consumption, and data usage. The game also has some alternatives that you can try if you are looking for other soccer games for Android.

    -

    If you are a fan of soccer games, you should definitely download Be a Pro Football Uptodown and enjoy playing full matches against AI or online opponents. Be a Pro Football Uptodown is one of the best soccer games for Android devices that lets you experience the thrill and excitement of soccer on your device.

    -

    FAQs

    -

    What is the size of Be a Pro Football Uptodown?

    -

    The size of Be a Pro Football Uptodown is about 500 MB. You need to have enough storage space on your device to download and install the game.

    -

    How can I update Be a Pro Football Uptodown?

    -

    You can update Be a Pro Football Uptodown by visiting the Uptodown website and downloading the latest version of the game. You can also enable automatic updates on your device settings to get notified when there is a new version available.

    -

    How can I contact the developers of Be a Pro Football?

    -

    You can contact the developers of Be a Pro Football by sending them an email at beapro.games@gmail.com. You can also follow them on their social media accounts on Facebook, Twitter, Instagram, and YouTube.

    -

    How can I play Be a Pro Football Uptodown on PC?

    -

    You can play Be a Pro Football Uptodown on PC by using an Android emulator like BlueStacks or NoxPlayer. You need to download and install the emulator on your PC and then download and install the APK file of Be a Pro Football Uptodown from the Uptodown website.

    -

    Is Be a Pro Football Uptodown safe to download?

    -

    Yes, Be a Pro Football Uptodown is safe to download from Uptodown, as they scan and verify all the apps they offer. However, you should always be careful when downloading apps from third-party sources and follow some tips to avoid malware and viruses.

    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/DragGAN/stylegan2/model.py b/spaces/fffiloni/DragGAN/stylegan2/model.py deleted file mode 100644 index 71dc94be673ae747ab39ca910d77db27f4c5a92f..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/DragGAN/stylegan2/model.py +++ /dev/null @@ -1,696 +0,0 @@ -import math -import random - -import torch -from torch import nn -from torch.nn import functional as F - -from .op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * 
self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, 
width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - 
self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - 
) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/fffiloni/audio-to-spectrogram/app.py b/spaces/fffiloni/audio-to-spectrogram/app.py deleted file mode 100644 index ba95a14f05acff6415f34f5e7d5045e6c5b6ee1f..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audio-to-spectrogram/app.py +++ /dev/null @@ -1,151 +0,0 @@ -import gradio as gr - -import io - -import numpy as np -from PIL import Image -import pydub -from scipy.io import wavfile -import torch -import torchaudio -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("-i", "--input", help="Input file to process, anything that FFMPEG supports, but wav and mp3 are recommended") -parser.add_argument("-o", "--output", help="Output Image") -parser.add_argument("-m", "--maxvol", default=100, help="Max Volume, 255 for identical results") -parser.add_argument("-p", "--powerforimage", default=0.25, help="Power for Image") -parser.add_argument("-n", "--nmels", default=512, help="n_mels to use for Image, basically width. Higher = more fidelity") -args = parser.parse_args() - -def spectrogram_image_from_wav(wav_bytes: io.BytesIO, max_volume: float = 50, power_for_image: float = 0.25, ms_duration: int = 5119) -> Image.Image: - """ - Generate a spectrogram image from a WAV file. 
- """ - # Read WAV file from bytes - sample_rate, waveform = wavfile.read(wav_bytes) - - #sample_rate = 44100 # [Hz] - clip_duration_ms = ms_duration # [ms] - - bins_per_image = 512 - n_mels = int(args.nmels) - mel_scale = True - - # FFT parameters - window_duration_ms = 100 # [ms] - padded_duration_ms = 400 # [ms] - step_size_ms = 10 # [ms] - - # Derived parameters - num_samples = int(512 / float(bins_per_image) * clip_duration_ms) * sample_rate - n_fft = int(padded_duration_ms / 1000.0 * sample_rate) - hop_length = int(step_size_ms / 1000.0 * sample_rate) - win_length = int(window_duration_ms / 1000.0 * sample_rate) - - # Compute spectrogram from waveform - Sxx = spectrogram_from_waveform( - waveform=waveform, - sample_rate=sample_rate, - n_fft=n_fft, - hop_length=hop_length, - win_length=win_length, - mel_scale=mel_scale, - n_mels=n_mels, - ) - - # Convert spectrogram to image - image = image_from_spectrogram(Sxx, max_volume=max_volume, power_for_image=power_for_image) - - return image - -def spectrogram_from_waveform( - waveform: np.ndarray, - sample_rate: int, - n_fft: int, - hop_length: int, - win_length: int, - mel_scale: bool = True, - n_mels: int = 512, -) -> np.ndarray: - """ - Compute a spectrogram from a waveform. - """ - - spectrogram_func = torchaudio.transforms.Spectrogram( - n_fft=n_fft, - power=None, - hop_length=hop_length, - win_length=win_length, - ) - - waveform_tensor = torch.from_numpy(waveform.astype(np.float32)).reshape(1, -1) - Sxx_complex = spectrogram_func(waveform_tensor).numpy()[0] - - Sxx_mag = np.abs(Sxx_complex) - - if mel_scale: - mel_scaler = torchaudio.transforms.MelScale( - n_mels=n_mels, - sample_rate=sample_rate, - f_min=0, - f_max=10000, - n_stft=n_fft // 2 + 1, - norm=None, - mel_scale="htk", - ) - - Sxx_mag = mel_scaler(torch.from_numpy(Sxx_mag)).numpy() - - return Sxx_mag - -def image_from_spectrogram( - data: np.ndarray, - max_volume: float = 50, - power_for_image: float = 0.25 -) -> Image.Image: - data = np.power(data, power_for_image) - data = data / (max_volume / 255) - data = 255 - data - data = data[::-1] - image = Image.fromarray(data.astype(np.uint8)) - return image - -def spectrogram_image_from_file(filename, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image: - """ - Generate a spectrogram image from an MP3 file. 
- """ - - max_volume = int(max_volume) - power_for_image = float(args.powerforimage) - - # Load MP3 file into AudioSegment object - audio = pydub.AudioSegment.from_file(filename) - - # Convert to mono and set frame rate - audio = audio.set_channels(1) - audio = audio.set_frame_rate(44100) - - length_in_ms = len(audio) - print("ORIGINAL AUDIO LENGTH IN MS:", length_in_ms) - # Extract first 5 seconds of audio data - audio = audio[:5119] - length_in_ms = len(audio) - print("CROPPED AUDIO LENGTH IN MS:", length_in_ms) - - # Convert to WAV and save as BytesIO object - wav_bytes = io.BytesIO() - audio.export("clip.wav", format="wav") - audio.export(wav_bytes, format="wav") - wav_bytes.seek(0) - - # Generate spectrogram image from WAV file - return spectrogram_image_from_wav(wav_bytes, max_volume=max_volume, power_for_image=power_for_image, ms_duration=length_in_ms) - -def convert(audio): - - image = spectrogram_image_from_file(audio, 50) - - return image - -gr.Interface(fn=convert, inputs=[gr.Audio(source="upload", type="filepath")], outputs=[gr.Image()]).launch() \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/trace_events.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/trace_events.d.ts deleted file mode 100644 index d47aa9311ec85754ce71d1ee64c8b3bb9f509b20..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/trace_events.d.ts +++ /dev/null @@ -1,171 +0,0 @@ -/** - * The `trace_events` module provides a mechanism to centralize tracing information - * generated by V8, Node.js core, and userspace code. - * - * Tracing can be enabled with the `--trace-event-categories` command-line flag - * or by using the `trace_events` module. The `--trace-event-categories` flag - * accepts a list of comma-separated category names. - * - * The available categories are: - * - * * `node`: An empty placeholder. - * * `node.async_hooks`: Enables capture of detailed `async_hooks` trace data. - * The `async_hooks` events have a unique `asyncId` and a special `triggerId` `triggerAsyncId` property. - * * `node.bootstrap`: Enables capture of Node.js bootstrap milestones. - * * `node.console`: Enables capture of `console.time()` and `console.count()`output. - * * `node.dns.native`: Enables capture of trace data for DNS queries. - * * `node.environment`: Enables capture of Node.js Environment milestones. - * * `node.fs.sync`: Enables capture of trace data for file system sync methods. - * * `node.perf`: Enables capture of `Performance API` measurements. - * * `node.perf.usertiming`: Enables capture of only Performance API User Timing - * measures and marks. - * * `node.perf.timerify`: Enables capture of only Performance API timerify - * measurements. - * * `node.promises.rejections`: Enables capture of trace data tracking the number - * of unhandled Promise rejections and handled-after-rejections. - * * `node.vm.script`: Enables capture of trace data for the `vm` module's`runInNewContext()`, `runInContext()`, and `runInThisContext()` methods. - * * `v8`: The `V8` events are GC, compiling, and execution related. - * - * By default the `node`, `node.async_hooks`, and `v8` categories are enabled. - * - * ```bash - * node --trace-event-categories v8,node,node.async_hooks server.js - * ``` - * - * Prior versions of Node.js required the use of the `--trace-events-enabled`flag to enable trace events. This requirement has been removed. 
However, the`--trace-events-enabled` flag _may_ still be - * used and will enable the`node`, `node.async_hooks`, and `v8` trace event categories by default. - * - * ```bash - * node --trace-events-enabled - * - * # is equivalent to - * - * node --trace-event-categories v8,node,node.async_hooks - * ``` - * - * Alternatively, trace events may be enabled using the `trace_events` module: - * - * ```js - * const trace_events = require('trace_events'); - * const tracing = trace_events.createTracing({ categories: ['node.perf'] }); - * tracing.enable(); // Enable trace event capture for the 'node.perf' category - * - * // do work - * - * tracing.disable(); // Disable trace event capture for the 'node.perf' category - * ``` - * - * Running Node.js with tracing enabled will produce log files that can be opened - * in the [`chrome://tracing`](https://www.chromium.org/developers/how-tos/trace-event-profiling-tool) tab of Chrome. - * - * The logging file is by default called `node_trace.${rotation}.log`, where`${rotation}` is an incrementing log-rotation id. The filepath pattern can - * be specified with `--trace-event-file-pattern` that accepts a template - * string that supports `${rotation}` and `${pid}`: - * - * ```bash - * node --trace-event-categories v8 --trace-event-file-pattern '${pid}-${rotation}.log' server.js - * ``` - * - * To guarantee that the log file is properly generated after signal events like`SIGINT`, `SIGTERM`, or `SIGBREAK`, make sure to have the appropriate handlers - * in your code, such as: - * - * ```js - * process.on('SIGINT', function onSigint() { - * console.info('Received SIGINT.'); - * process.exit(130); // Or applicable exit code depending on OS and signal - * }); - * ``` - * - * The tracing system uses the same time source - * as the one used by `process.hrtime()`. - * However the trace-event timestamps are expressed in microseconds, - * unlike `process.hrtime()` which returns nanoseconds. - * - * The features from this module are not available in `Worker` threads. - * @experimental - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/trace_events.js) - */ -declare module 'trace_events' { - /** - * The `Tracing` object is used to enable or disable tracing for sets of - * categories. Instances are created using the - * `trace_events.createTracing()` method. - * - * When created, the `Tracing` object is disabled. Calling the - * `tracing.enable()` method adds the categories to the set of enabled trace - * event categories. Calling `tracing.disable()` will remove the categories - * from the set of enabled trace event categories. - */ - interface Tracing { - /** - * A comma-separated list of the trace event categories covered by this - * `Tracing` object. - */ - readonly categories: string; - /** - * Disables this `Tracing` object. - * - * Only trace event categories _not_ covered by other enabled `Tracing` - * objects and _not_ specified by the `--trace-event-categories` flag - * will be disabled. - */ - disable(): void; - /** - * Enables this `Tracing` object for the set of categories covered by - * the `Tracing` object. - */ - enable(): void; - /** - * `true` only if the `Tracing` object has been enabled. - */ - readonly enabled: boolean; - } - interface CreateTracingOptions { - /** - * An array of trace category names. Values included in the array are - * coerced to a string when possible. An error will be thrown if the - * value cannot be coerced. 
- */ - categories: string[]; - } - /** - * Creates and returns a `Tracing` object for the given set of `categories`. - * - * ```js - * const trace_events = require('trace_events'); - * const categories = ['node.perf', 'node.async_hooks']; - * const tracing = trace_events.createTracing({ categories }); - * tracing.enable(); - * // do stuff - * tracing.disable(); - * ``` - * @since v10.0.0 - * @return . - */ - function createTracing(options: CreateTracingOptions): Tracing; - /** - * Returns a comma-separated list of all currently-enabled trace event - * categories. The current set of enabled trace event categories is determined - * by the _union_ of all currently-enabled `Tracing` objects and any categories - * enabled using the `--trace-event-categories` flag. - * - * Given the file `test.js` below, the command`node --trace-event-categories node.perf test.js` will print`'node.async_hooks,node.perf'` to the console. - * - * ```js - * const trace_events = require('trace_events'); - * const t1 = trace_events.createTracing({ categories: ['node.async_hooks'] }); - * const t2 = trace_events.createTracing({ categories: ['node.perf'] }); - * const t3 = trace_events.createTracing({ categories: ['v8'] }); - * - * t1.enable(); - * t2.enable(); - * - * console.log(trace_events.getEnabledCategories()); - * ``` - * @since v10.0.0 - */ - function getEnabledCategories(): string | undefined; -} -declare module 'node:trace_events' { - export * from 'trace_events'; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/public/OrientedCursor.js b/spaces/fffiloni/controlnet-animation-doodle/public/OrientedCursor.js deleted file mode 100644 index 7ed4b78bd4c2c7c05cdc6b43fa7a300344b26e06..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/public/OrientedCursor.js +++ /dev/null @@ -1,164 +0,0 @@ -class OrientedCursor{ - - constructor(elementID){ - - this.elementID = elementID; - this.tiltX = 0; - this.tiltY = 0; - this.pressure = 0; - this.diameter = 0; - - this.targetAngle = 0; - - this.isOnCanvas = false; - } - - // ----------------------------------------- - // ----------------------------------------- - - - catchCursor(){ - let getCanvas = document.getElementById(this.elementID); - - getCanvas.addEventListener("pointermove", (e) => { - //console.log("pointerMove"); - - if (this.isOnCanvas) { - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - //console.log(inclinationX + ' ' + inclinationY + ' ' + pressure); - } - }, false); - - getCanvas.addEventListener("pointerdown", (e) => { - //console.log("pointerDown"); - getCanvas.setPointerCapture(e.pointerId); - this.isOnCanvas = true; - - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - }, false); - - getCanvas.addEventListener("pointerup", (e) => { - //console.log("pointerUp"); - - if (this.isOnCanvas) { - getCanvas.releasePointerCapture(e.pointerId); - this.isOnCanvas = false; - - this.tiltX = e.tiltX; - this.tiltY = e.tiltY; - this.pressure = e.pressure; - - //console.log(inclinationX + ' ' + inclinationY + ' ' + pressure); - - } - }, false); - } - - - // ----------------------------------------- - // ----------------------------------------- - - - calculateAngle(){ - this.targetAngle = atan2(this.tiltY, this.tiltX); - } - - - // ----------------------------------------- - // ----------------------------------------- - - - showData(){ - // LIVE COORDINATES - push(); - noFill(); - stroke('#fff') - text('pressure: ' + this.pressure, 10, 30); - text('tilt_X: ' + 
this.tiltX, 10, 50); - text('tilt_Y: ' + this.tiltY, 10, 70); - text('angle arctan: ' + this.targetAngle, 10, 90); - pop(); - } - - // ----------------------------------------- - // ----------------------------------------- - - - mapPressure(){ - this.diameter = map(this.pressure, 0, 1, 1, 3); - } - - // ----------------------------------------- - // ----------------------------------------- - - - process_rotate(){ - translate(mouseX, mouseY); //mouseX & mouseY - rotate(this.targetAngle); - translate(-mouseX, -mouseY); // -mouseX & -mouseY - } - - // ----------------------------------------- - // ----------------------------------------- - - - showCursor(mouseX, mouseY){ - // POINTER CENTER - push(); - noStroke(); - fill(0, 0, 0); - circle(mouseX, mouseY, 20); - pop(); - - // RECTANGLE SHAPE - push(); - this.process_rotate() - - noFill(); - stroke(2) - rectMode(CENTER) - rect(mouseX, mouseY, this.diameter, 30); // reacts to pen pressure value - - noStroke(); - fill('yellow'); - circle(mouseX, mouseY, 10); // shows the pivot point - pop(); - - // POINTS FROM STYLUS AT GOOD INCLINATION & PRESSURE VALUE - push(); - this.process_rotate(); - noFill(); - stroke(1); - ellipseMode(CENTER); - circle(mouseX, mouseY + this.diameter, 10); // LEFT || WEST - circle(mouseX + this.diameter, mouseY, 10);// DOWN || SOUTH - circle(mouseX, mouseY - this.diameter, 10); // RIGHT || EAST - circle(mouseX - this.diameter, mouseY, 10); // UP || NORTH - - - pop(); - - circle(mouseX + this.diameter/4 * cos(this.targetAngle), mouseY + this.diameter/4 * sin(this.targetAngle), 1) - circle(mouseX + this.diameter/4 * cos(this.targetAngle + PI), mouseY + this.diameter/4 * sin(this.targetAngle+ PI), 1) - - - // TILT AXIS & LENGTH - push(); - fill('red'); - circle(mouseX + this.tiltX, mouseY + this.tiltY, 10); - - pop(); - - push(); - fill('blue'); - circle(mouseX - this.tiltX, mouseY - this.tiltY, 10); - pop(); - } - - } \ No newline at end of file diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/visualizers/colors.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/visualizers/colors.py deleted file mode 100644 index 9e9e39182c58cb06a1c5e97a7e6c497cc3388ebe..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/visualizers/colors.py +++ /dev/null @@ -1,76 +0,0 @@ -import random -import colorsys - -import numpy as np -import matplotlib -matplotlib.use('agg') -import matplotlib.pyplot as plt -from matplotlib.colors import LinearSegmentedColormap - - -def generate_colors(nlabels, type='bright', first_color_black=False, last_color_black=True, verbose=False): - # https://stackoverflow.com/questions/14720331/how-to-generate-random-colors-in-matplotlib - """ - Creates a random colormap to be used together with matplotlib. Useful for segmentation tasks - :param nlabels: Number of labels (size of colormap) - :param type: 'bright' for strong colors, 'soft' for pastel colors - :param first_color_black: Option to use first color as black, True or False - :param last_color_black: Option to use last color as black, True or False - :param verbose: Prints the number of labels and shows the colormap. 
True or False - :return: colormap for matplotlib - """ - if type not in ('bright', 'soft'): - print ('Please choose "bright" or "soft" for type') - return - - if verbose: - print('Number of labels: ' + str(nlabels)) - - # Generate color map for bright colors, based on hsv - if type == 'bright': - randHSVcolors = [(np.random.uniform(low=0.0, high=1), - np.random.uniform(low=0.2, high=1), - np.random.uniform(low=0.9, high=1)) for i in range(nlabels)] - - # Convert HSV list to RGB - randRGBcolors = [] - for HSVcolor in randHSVcolors: - randRGBcolors.append(colorsys.hsv_to_rgb(HSVcolor[0], HSVcolor[1], HSVcolor[2])) - - if first_color_black: - randRGBcolors[0] = [0, 0, 0] - - if last_color_black: - randRGBcolors[-1] = [0, 0, 0] - - random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels) - - # Generate soft pastel colors, by limiting the RGB spectrum - if type == 'soft': - low = 0.6 - high = 0.95 - randRGBcolors = [(np.random.uniform(low=low, high=high), - np.random.uniform(low=low, high=high), - np.random.uniform(low=low, high=high)) for i in range(nlabels)] - - if first_color_black: - randRGBcolors[0] = [0, 0, 0] - - if last_color_black: - randRGBcolors[-1] = [0, 0, 0] - random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels) - - # Display colorbar - if verbose: - from matplotlib import colors, colorbar - from matplotlib import pyplot as plt - fig, ax = plt.subplots(1, 1, figsize=(15, 0.5)) - - bounds = np.linspace(0, nlabels, nlabels + 1) - norm = colors.BoundaryNorm(bounds, nlabels) - - cb = colorbar.ColorbarBase(ax, cmap=random_colormap, norm=norm, spacing='proportional', ticks=None, - boundaries=bounds, format='%1i', orientation=u'horizontal') - - return randRGBcolors, random_colormap - diff --git a/spaces/flax-community/Multilingual-VQA/README.md b/spaces/flax-community/Multilingual-VQA/README.md deleted file mode 100644 index f25d66135e16f69044ad8398df5f93fd30c5d67c..0000000000000000000000000000000000000000 --- a/spaces/flax-community/Multilingual-VQA/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Multilingual VQA -thumbnail: https://github.com/gchhablani/multilingual-vqa/raw/main/mvqa-logo-3-white.png -emoji: 👁️‍🗨️ -colorFrom: pink -colorTo: red -sdk_version: 0.88.0 -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/flowers-team/SocialAISchool/utils/env.py b/spaces/flowers-team/SocialAISchool/utils/env.py deleted file mode 100644 index 4f4fd5f0f995c733a789727bd869e4a8e4239829..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/utils/env.py +++ /dev/null @@ -1,16 +0,0 @@ -import gym -import gym_minigrid - - -def make_env(env_key, seed=None, env_args={}): - env = gym.make(env_key, **env_args) - env.seed(seed) - return env - - -def env_args_str_to_dict(env_args_str): - if not env_args_str: - return {} - keys = env_args_str[::2] # Every even element is a key - vals = env_args_str[1::2] # Every odd element is a value - return dict(zip(keys, [eval(v) for v in vals])) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/info.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/info.py deleted file mode 100644 index 29f2e5598ae2bb5866ccd15a7d3b4de33c0cd14d..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/info.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os - -import torch - -if torch.__version__ == 'parrots': - import parrots - - def get_compiler_version(): - return 'GCC ' + parrots.version.compiler - - def get_compiling_cuda_version(): - return parrots.version.cuda -else: - from ..utils import ext_loader - ext_module = ext_loader.load_ext( - '_ext', ['get_compiler_version', 'get_compiling_cuda_version']) - - def get_compiler_version(): - return ext_module.get_compiler_version() - - def get_compiling_cuda_version(): - return ext_module.get_compiling_cuda_version() - - -def get_onnxruntime_op_path(): - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_ort.*.so') - - paths = glob.glob(wildcard) - if len(paths) > 0: - return paths[0] - else: - return '' diff --git a/spaces/ghuron/artist/sql.py b/spaces/ghuron/artist/sql.py deleted file mode 100644 index 30685f4e383889df0a7a255413ad30160052df76..0000000000000000000000000000000000000000 --- a/spaces/ghuron/artist/sql.py +++ /dev/null @@ -1,36 +0,0 @@ -import sqlite3 as sq -import threading - -thread_local = threading.local() - -def get_connection(): - if not hasattr(thread_local, "connection"): - thread_local.connection = sq.connect('dataset/astro.sql') - - return thread_local.connection - - -def get_cursor(): - if not hasattr(thread_local, "cursor"): - thread_local.cursor = get_connection().cursor() - - return thread_local.cursor - - -def get_article(index:int): - cur = get_cursor() - cur.execute(f'SELECT id, title, abstract, strftime("%Y-%m-%d", date) FROM astro WHERE "index" = {index}') - return cur.fetchone() - - -def get_index_articles(i_year, f_year): - cur = get_cursor() - cur.execute(f'SELECT "index" FROM astro WHERE date BETWEEN "{i_year}-01-01" AND "{f_year}-12-31";') - return cur.fetchall() - -def year_range(): - cur = get_cursor() - cur.execute("SELECT strftime('%Y', MIN(date)), strftime('%Y', MAX(date)) FROM astro;") - return [int(year) for year in cur.fetchone()] - - \ No newline at end of file diff --git a/spaces/gigant/slideshow_extraction/README.md b/spaces/gigant/slideshow_extraction/README.md deleted file mode 100644 index faf1ce919657f287d8e592cad6e9af369528d5fc..0000000000000000000000000000000000000000 --- a/spaces/gigant/slideshow_extraction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Slideshow Extraction -emoji: 🎞️2️📊 -colorFrom: blue -colorTo: indigo -sdk: 
gradio -sdk_version: 3.11 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/giswqs/solara-maxar/pages/00_home.py b/spaces/giswqs/solara-maxar/pages/00_home.py deleted file mode 100644 index 2374f6ed9f77b3d2c7ac99fcbfbc9edd8251bef4..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-maxar/pages/00_home.py +++ /dev/null @@ -1,20 +0,0 @@ -import solara - - -@solara.component -def Page(): - with solara.Column(align="center"): - markdown = """ - ## A Solara Web App for Visualizing [Maxar Open Data](https://www.maxar.com/open-data) - - ### Introduction - - **A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications.** - - - Web App: - - GitHub: - - Hugging Face: - - """ - - solara.Markdown(markdown) diff --git a/spaces/gossminn/fillmorle-app/sftp/metrics/exact_match.py b/spaces/gossminn/fillmorle-app/sftp/metrics/exact_match.py deleted file mode 100644 index 2d6596b29ee30969c981c52c1cf1fd3381ea53d1..0000000000000000000000000000000000000000 --- a/spaces/gossminn/fillmorle-app/sftp/metrics/exact_match.py +++ /dev/null @@ -1,29 +0,0 @@ -from allennlp.training.metrics import Metric -from overrides import overrides - -from .base_f import BaseF -from ..utils import Span - - -@Metric.register('exact_match') -class ExactMatch(BaseF): - def __init__(self, check_type: bool): - self.check_type = check_type - if check_type: - super(ExactMatch, self).__init__('em') - else: - super(ExactMatch, self).__init__('sm') - - @overrides - def __call__( - self, - prediction: Span, - gold: Span, - ): - tp = prediction.match(gold, self.check_type) - 1 - fp = prediction.n_nodes - tp - 1 - fn = gold.n_nodes - tp - 1 - assert tp >= 0 and fp >= 0 and fn >= 0 - self.tp += tp - self.fp += fp - self.fn += fn diff --git a/spaces/gotiQspiryo/whisper-ui/Descargar-Stc-2000-Para-Rfactor-196-LINK.md b/spaces/gotiQspiryo/whisper-ui/Descargar-Stc-2000-Para-Rfactor-196-LINK.md deleted file mode 100644 index 422595669bb4654bcf086732eaa34ea4d8d1acbd..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/Descargar-Stc-2000-Para-Rfactor-196-LINK.md +++ /dev/null @@ -1,116 +0,0 @@ -## Descargar Stc 2000 Para Rfactor 196 - - - - - - ![Descargar Stc 2000 Para Rfactor 196 - - - - - -

    Download File >>> \[https://vercupalo.blogspot.com/?d=2txnEI\](https://vercupalo.blogspot.com/?d=2txnEI)

](https://k30.kn3.net/taringa/6/E/C/2/6/B/TateComics/30A.jpg)

# How to download the STC2000 mod for rFactor?

rFactor is a car racing simulator that lets users create and modify their own vehicles, circuits, and competitions. One of the most popular mods among Argentine motorsport fans is the STC2000 mod, which recreates the Super Turismo Competición 2000 category, where models such as the Chevrolet Cruze, Renault Fluence, Toyota Corolla, and Honda Civic race against each other.

To download the STC2000 mod for rFactor you first need the base game installed, which can be bought on the official rFactor page or on platforms such as Steam. Then follow these steps:

1. Go to the [Descargar - Juegos PC](https://www.youtube.com/watch?v=sLXHPzC1AFw) website, which offers the STC2000 mod together with the corresponding tracks[^1^].
2. Download the three files hosted on the Mega links: the first contains rFactor with the crack, the second contains the STC2000 mod, and the third contains the tracks[^1^].
3. Extract the archives with a program such as WinRAR or 7-Zip.
4. Copy the contents of the rFactor folder into the folder where the base game is installed.
5. Copy the contents of the STC2000 folder into the base game's GameData/Vehicles folder.
6. Copy the contents of the Pistas (tracks) folder into the base game's GameData/Locations folder.
7. Launch the game and select the STC2000 mod from the main menu.

Steps 4 to 6 are plain folder copies; see the scripted sketch after the tips list below.

Done this way, you can enjoy the STC2000 mod for rFactor, which offers a realistic and challenging experience for racing fans. The mod includes the official cars, drivers, teams, and circuits of the category's 2019/20 season[^2^], as well as an improved physics and damage system. It can be played both in single-player mode and in online multiplayer.

The STC2000 mod for rFactor does not only offer fun and excitement, but also a chance to learn and improve your driving skills. Some tips for getting the most out of the mod are:

- Adjust the game and wheel or keyboard settings to your personal preferences and the difficulty level you want.
- Practice in free training mode to become familiar with the controls, the cars, and the circuits.
- Study the characteristics of each track, such as the corners, the straights, the elevation changes, and the braking points.
- Watch how your rivals behave and learn from their strategies and mistakes.
- Respect the competition rules and avoid collisions and penalties.
- Have fun and enjoy the STC2000 mod for rFactor, which is one of the best mods available for the simulator.
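Steps 4 to 6 of the install procedure are plain folder copies, so they can be scripted. The sketch below is only an illustration: the download location, the `C:/Games/rFactor` install path, and the merge behavior are assumptions built from the article's wording, and should be adjusted to the actual machine before running anything.

```python
# Hedged sketch of the copy steps described in the article. All paths are
# assumptions for illustration; edit DOWNLOADS and RFACTOR_DIR to match your setup.
import shutil
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads" / "stc2000"  # where the archives were extracted (assumed)
RFACTOR_DIR = Path("C:/Games/rFactor")             # base-game install folder (assumed)

def merge_copy(src: Path, dst: Path) -> None:
    """Copy the contents of src into dst, merging with whatever is already there."""
    shutil.copytree(src, dst, dirs_exist_ok=True)  # dirs_exist_ok requires Python 3.8+

# Step 5: the STC2000 car content goes into GameData/Vehicles
merge_copy(DOWNLOADS / "STC2000", RFACTOR_DIR / "GameData" / "Vehicles")

# Step 6: the "Pistas" (tracks) content goes into GameData/Locations
merge_copy(DOWNLOADS / "Pistas", RFACTOR_DIR / "GameData" / "Locations")
```

If the extracted folders are nested one level deeper than assumed here, point the `src` arguments at the inner folders instead.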
- -- - - - - dfd1c89656 - - - - - diff --git a/spaces/gotiQspiryo/whisper-ui/examples/3d Desktop Colossus 3g For Desktopx Download ((BETTER)) Gratis.md b/spaces/gotiQspiryo/whisper-ui/examples/3d Desktop Colossus 3g For Desktopx Download ((BETTER)) Gratis.md deleted file mode 100644 index c2c29b65a5e101abff4bd74adc6ee90d83a1c720..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/3d Desktop Colossus 3g For Desktopx Download ((BETTER)) Gratis.md +++ /dev/null @@ -1,6 +0,0 @@ -

    3d Desktop Colossus 3g For Desktopx Download Gratis


    DOWNLOAD ○○○ https://urlgoal.com/2uyNgI



- -Download and Play today to add Fright Fest related Six Flags assets to your. Hilton St. ... chip for xp driver. Download 3d desktop colossus 3g for desktopx for free. 1fdad05405
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/AMARA FLASH SUITE 2.0.rar VERIFIED Full Version.md b/spaces/gotiQspiryo/whisper-ui/examples/AMARA FLASH SUITE 2.0.rar VERIFIED Full Version.md deleted file mode 100644 index bb5d8f66e3305562f9c3ab5f18c8fe264f51cf9e..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/AMARA FLASH SUITE 2.0.rar VERIFIED Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    AMARA FLASH SUITE 2.0.rar full version


    Downloadhttps://urlgoal.com/2uyNki



    -
    -To create more accurate search results for Amara Flash Suite 2.0 try to exclude using commonly used keywords such as: crack, download, serial, keygen, torrent ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/gradio/gpt-neo/export.py b/spaces/gradio/gpt-neo/export.py deleted file mode 100644 index 82ad4d649fe8201700502e733428dd46d5f55bbe..0000000000000000000000000000000000000000 --- a/spaces/gradio/gpt-neo/export.py +++ /dev/null @@ -1,14 +0,0 @@ -import tensorflow.compat.v1 as tf - -def export_model(estimator, export_dir, params, - checkpoint_path=None): - - - def serving_input_receiver_fn(): - t = tf.placeholder(dtype=tf.int64, - shape=[1, params["n_ctx"]], - name='input_example_tensor') - return tf.estimator.export.ServingInputReceiver(t, t) - - return estimator.export_saved_model( - export_dir, serving_input_receiver_fn, checkpoint_path=checkpoint_path) \ No newline at end of file diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/data/template_dataset.py b/spaces/gwang-kim/DATID-3D/pose_estimation/data/template_dataset.py deleted file mode 100644 index bfdf16be2a8a834b204c45d88c86857b37b9bd25..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/data/template_dataset.py +++ /dev/null @@ -1,75 +0,0 @@ -"""Dataset class template - -This module provides a template for users to implement custom datasets. -You can specify '--dataset_mode template' to use this dataset. -The class name should be consistent with both the filename and its dataset_mode option. -The filename should be _dataset.py -The class name should be Dataset.py -You need to implement the following functions: - -- : Add dataset-specific options and rewrite default values for existing options. - -- <__init__>: Initialize this dataset class. - -- <__getitem__>: Return a data point and its metadata information. - -- <__len__>: Return the number of images. -""" -from data.base_dataset import BaseDataset, get_transform -# from data.image_folder import make_dataset -# from PIL import Image - - -class TemplateDataset(BaseDataset): - """A template dataset class for you to implement custom datasets.""" - @staticmethod - def modify_commandline_options(parser, is_train): - """Add new dataset-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - parser.add_argument('--new_dataset_option', type=float, default=1.0, help='new dataset option') - parser.set_defaults(max_dataset_size=10, new_dataset_option=2.0) # specify dataset-specific default values - return parser - - def __init__(self, opt): - """Initialize this dataset class. - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - - A few things can be done here. - - save the options (have been done in BaseDataset) - - get image paths and meta information of the dataset. - - define the image transformation. - """ - # save the option and dataset root - BaseDataset.__init__(self, opt) - # get the image paths of your dataset; - self.image_paths = [] # You can call sorted(make_dataset(self.root, opt.max_dataset_size)) to get all the image paths under the directory self.root - # define the default transform function. You can use ; You can also define your custom transform function - self.transform = get_transform(opt) - - def __getitem__(self, index): - """Return a data point and its metadata information. - - Parameters: - index -- a random integer for data indexing - - Returns: - a dictionary of data with their names. 
It usually contains the data itself and its metadata information. - - Step 1: get a random image path: e.g., path = self.image_paths[index] - Step 2: load your data from the disk: e.g., image = Image.open(path).convert('RGB'). - Step 3: convert your data to a PyTorch tensor. You can use helpder functions such as self.transform. e.g., data = self.transform(image) - Step 4: return a data point as a dictionary. - """ - path = 'temp' # needs to be a string - data_A = None # needs to be a tensor - data_B = None # needs to be a tensor - return {'data_A': data_A, 'data_B': data_B, 'path': path} - - def __len__(self): - """Return the total number of images.""" - return len(self.image_paths) diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/utils/utils_config.py b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/editor.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/editor.py deleted file mode 100644 index b1c2ac56fd7b4b127f948c6b8cf15874a8fe9d93..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/editor.py +++ /dev/null @@ -1,507 +0,0 @@ -# python 3.7 -"""Utility functions for image editing from latent space.""" - -import os.path -import numpy as np - -__all__ = [ - 'parse_indices', 'interpolate', 'mix_style', - 'get_layerwise_manipulation_strength', 'manipulate', 'parse_boundary_list' -] - - -def parse_indices(obj, min_val=None, max_val=None): - """Parses indices. - - If the input is a list or tuple, this function has no effect. - - The input can also be a string, which is either a comma separated list of - numbers 'a, b, c', or a dash separated range 'a - c'. Space in the string will - be ignored. - - Args: - obj: The input object to parse indices from. - min_val: If not `None`, this function will check that all indices are equal - to or larger than this value. (default: None) - max_val: If not `None`, this function will check that all indices are equal - to or smaller than this field. (default: None) - - Returns: - A list of integers. - - Raises: - If the input is invalid, i.e., neither a list or tuple, nor a string. 
- """ - if obj is None or obj == '': - indices = [] - elif isinstance(obj, int): - indices = [obj] - elif isinstance(obj, (list, tuple, np.ndarray)): - indices = list(obj) - elif isinstance(obj, str): - indices = [] - splits = obj.replace(' ', '').split(',') - for split in splits: - numbers = list(map(int, split.split('-'))) - if len(numbers) == 1: - indices.append(numbers[0]) - elif len(numbers) == 2: - indices.extend(list(range(numbers[0], numbers[1] + 1))) - else: - raise ValueError(f'Invalid type of input: {type(obj)}!') - - assert isinstance(indices, list) - indices = sorted(list(set(indices))) - for idx in indices: - assert isinstance(idx, int) - if min_val is not None: - assert idx >= min_val, f'{idx} is smaller than min val `{min_val}`!' - if max_val is not None: - assert idx <= max_val, f'{idx} is larger than max val `{max_val}`!' - - return indices - - -def interpolate(src_codes, dst_codes, step=5): - """Interpolates two sets of latent codes linearly. - - Args: - src_codes: Source codes, with shape [num, *code_shape]. - dst_codes: Target codes, with shape [num, *code_shape]. - step: Number of interplolation steps, with source and target included. For - example, if `step = 5`, three more samples will be inserted. (default: 5) - - Returns: - Interpolated codes, with shape [num, step, *code_shape]. - - Raises: - ValueError: If the input two sets of latent codes are with different shapes. - """ - if not (src_codes.ndim >= 2 and src_codes.shape == dst_codes.shape): - raise ValueError(f'Shapes of source codes and target codes should both be ' - f'[num, *code_shape], but {src_codes.shape} and ' - f'{dst_codes.shape} are received!') - num = src_codes.shape[0] - code_shape = src_codes.shape[1:] - - a = src_codes[:, np.newaxis] - b = dst_codes[:, np.newaxis] - l = np.linspace(0.0, 1.0, step).reshape( - [step if axis == 1 else 1 for axis in range(a.ndim)]) - results = a + l * (b - a) - assert results.shape == (num, step, *code_shape) - - return results - - -def mix_style(style_codes, - content_codes, - num_layers=1, - mix_layers=None, - is_style_layerwise=True, - is_content_layerwise=True): - """Mixes styles from style codes to those of content codes. - - Each style code or content code consists of `num_layers` codes, each of which - is typically fed into a particular layer of the generator. This function mixes - styles by partially replacing the codes of `content_codes` from some certain - layers with those of `style_codes`. - - For example, if both style code and content code are with shape [10, 512], - meaning to have 10 layers and each employs a 512-dimensional latent code. And - the 1st, 2nd, and 3rd layers are the target layers to perform style mixing. - Then the top half of the content code (with shape [3, 512]) will be replaced - by the top half of the style code (also with shape [3, 512]). - - NOTE: This function also supports taking single-layer latent codes as inputs, - i.e., setting `is_style_layerwise` or `is_content_layerwise` as False. In this - case, the corresponding code will be first repeated for `num_layers` before - performing style mixing. - - Args: - style_codes: Style codes, with shape [num_styles, *code_shape] or - [num_styles, num_layers, *code_shape]. - content_codes: Content codes, with shape [num_contents, *code_shape] or - [num_contents, num_layers, *code_shape]. - num_layers: Total number of layers in the generative model. (default: 1) - mix_layers: Indices of the layers to perform style mixing. 
`None` means to - replace all layers, in which case the content code will be completely - replaced by style code. (default: None) - is_style_layerwise: Indicating whether the input `style_codes` are - layer-wise codes. (default: True) - is_content_layerwise: Indicating whether the input `content_codes` are - layer-wise codes. (default: True) - num_layers - - Returns: - Codes after style mixing, with shape [num_styles, num_contents, num_layers, - *code_shape]. - - Raises: - ValueError: If input `content_codes` or `style_codes` is with invalid shape. - """ - if not is_style_layerwise: - style_codes = style_codes[:, np.newaxis] - style_codes = np.tile( - style_codes, - [num_layers if axis == 1 else 1 for axis in range(style_codes.ndim)]) - if not is_content_layerwise: - content_codes = content_codes[:, np.newaxis] - content_codes = np.tile( - content_codes, - [num_layers if axis == 1 else 1 for axis in range(content_codes.ndim)]) - - if not (style_codes.ndim >= 3 and style_codes.shape[1] == num_layers and - style_codes.shape[1:] == content_codes.shape[1:]): - raise ValueError(f'Shapes of style codes and content codes should be ' - f'[num_styles, num_layers, *code_shape] and ' - f'[num_contents, num_layers, *code_shape] respectively, ' - f'but {style_codes.shape} and {content_codes.shape} are ' - f'received!') - - layer_indices = parse_indices(mix_layers, min_val=0, max_val=num_layers - 1) - if not layer_indices: - layer_indices = list(range(num_layers)) - - num_styles = style_codes.shape[0] - num_contents = content_codes.shape[0] - code_shape = content_codes.shape[2:] - - s = style_codes[:, np.newaxis] - s = np.tile(s, [num_contents if axis == 1 else 1 for axis in range(s.ndim)]) - c = content_codes[np.newaxis] - c = np.tile(c, [num_styles if axis == 0 else 1 for axis in range(c.ndim)]) - - from_style = np.zeros(s.shape, dtype=bool) - from_style[:, :, layer_indices] = True - results = np.where(from_style, s, c) - assert results.shape == (num_styles, num_contents, num_layers, *code_shape) - - return results - - -def get_layerwise_manipulation_strength(num_layers, - truncation_psi, - truncation_layers): - """Gets layer-wise strength for manipulation. - - Recall the truncation trick played on layer [0, truncation_layers): - - w = truncation_psi * w + (1 - truncation_psi) * w_avg - - So, when using the same boundary to manipulate different layers, layer - [0, truncation_layers) and layer [truncation_layers, num_layers) should use - different strength to eliminate the effect from the truncation trick. More - concretely, the strength for layer [0, truncation_layers) is set as - `truncation_psi`, while that for other layers are set as 1. - """ - strength = [1.0 for _ in range(num_layers)] - if truncation_layers > 0: - for layer_idx in range(0, truncation_layers): - strength[layer_idx] = truncation_psi - return strength - - -def manipulate(latent_codes, - boundary, - start_distance=-5.0, - end_distance=5.0, - step=21, - layerwise_manipulation=False, - num_layers=1, - manipulate_layers=None, - is_code_layerwise=False, - is_boundary_layerwise=False, - layerwise_manipulation_strength=1.0): - """Manipulates the given latent codes with respect to a particular boundary. - - Basically, this function takes a set of latent codes and a boundary as inputs, - and outputs a collection of manipulated latent codes. - - For example, let `step` to be 10, `latent_codes` to be with shape [num, - *code_shape], and `boundary` to be with shape [1, *code_shape] and unit norm. 
- Then the output will be with shape [num, 10, *code_shape]. For each 10-element - manipulated codes, the first code is `start_distance` away from the original - code (i.e., the input) along the `boundary` direction, while the last code is - `end_distance` away. Remaining codes are linearly interpolated. Here, - `distance` is sign sensitive. - - NOTE: This function also supports layer-wise manipulation, in which case the - generator should be able to take layer-wise latent codes as inputs. For - example, if the generator has 18 convolutional layers in total, and each of - which takes an independent latent code as input. It is possible, sometimes - with even better performance, to only partially manipulate these latent codes - corresponding to some certain layers yet keeping others untouched. - - NOTE: Boundary is assumed to be normalized to unit norm already. - - Args: - latent_codes: The input latent codes for manipulation, with shape - [num, *code_shape] or [num, num_layers, *code_shape]. - boundary: The semantic boundary as reference, with shape [1, *code_shape] or - [1, num_layers, *code_shape]. - start_distance: Start point for manipulation. (default: -5.0) - end_distance: End point for manipulation. (default: 5.0) - step: Number of manipulation steps. (default: 21) - layerwise_manipulation: Whether to perform layer-wise manipulation. - (default: False) - num_layers: Number of layers. Only active when `layerwise_manipulation` is - set as `True`. Should be a positive integer. (default: 1) - manipulate_layers: Indices of the layers to perform manipulation. `None` - means to manipulate latent codes from all layers. (default: None) - is_code_layerwise: Whether the input latent codes are layer-wise. If set as - `False`, the function will first repeat the input codes for `num_layers` - times before perform manipulation. (default: False) - is_boundary_layerwise: Whether the input boundary is layer-wise. If set as - `False`, the function will first repeat boundary for `num_layers` times - before perform manipulation. (default: False) - layerwise_manipulation_strength: Manipulation strength for each layer. Only - active when `layerwise_manipulation` is set as `True`. This field can be - used to resolve the strength discrepancy across layers when truncation - trick is on. See function `get_layerwise_manipulation_strength()` for - details. A tuple, list, or `numpy.ndarray` is expected. If set as a single - number, this strength will be used for all layers. (default: 1.0) - - Returns: - Manipulated codes, with shape [num, step, *code_shape] if - `layerwise_manipulation` is set as `False`, or shape [num, step, - num_layers, *code_shape] if `layerwise_manipulation` is set as `True`. - - Raises: - ValueError: If the input latent codes, boundary, or strength are with - invalid shape. - """ - if not (boundary.ndim >= 2 and boundary.shape[0] == 1): - raise ValueError(f'Boundary should be with shape [1, *code_shape] or ' - f'[1, num_layers, *code_shape], but ' - f'{boundary.shape} is received!') - - if not layerwise_manipulation: - assert not is_code_layerwise - assert not is_boundary_layerwise - num_layers = 1 - manipulate_layers = None - layerwise_manipulation_strength = 1.0 - - # Preprocessing for layer-wise manipulation. - # Parse indices of manipulation layers. - layer_indices = parse_indices( - manipulate_layers, min_val=0, max_val=num_layers - 1) - if not layer_indices: - layer_indices = list(range(num_layers)) - # Make latent codes layer-wise if needed. 
- assert num_layers > 0 - if not is_code_layerwise: - x = latent_codes[:, np.newaxis] - x = np.tile(x, [num_layers if axis == 1 else 1 for axis in range(x.ndim)]) - else: - x = latent_codes - if x.shape[1] != num_layers: - raise ValueError(f'Latent codes should be with shape [num, num_layers, ' - f'*code_shape], where `num_layers` equals to ' - f'{num_layers}, but {x.shape} is received!') - # Make boundary layer-wise if needed. - if not is_boundary_layerwise: - b = boundary - b = np.tile(b, [num_layers if axis == 0 else 1 for axis in range(b.ndim)]) - else: - b = boundary[0] - if b.shape[0] != num_layers: - raise ValueError(f'Boundary should be with shape [num_layers, ' - f'*code_shape], where `num_layers` equals to ' - f'{num_layers}, but {b.shape} is received!') - # Get layer-wise manipulation strength. - if isinstance(layerwise_manipulation_strength, (int, float)): - s = [float(layerwise_manipulation_strength) for _ in range(num_layers)] - elif isinstance(layerwise_manipulation_strength, (list, tuple)): - s = layerwise_manipulation_strength - if len(s) != num_layers: - raise ValueError(f'Shape of layer-wise manipulation strength `{len(s)}` ' - f'mismatches number of layers `{num_layers}`!') - elif isinstance(layerwise_manipulation_strength, np.ndarray): - s = layerwise_manipulation_strength - if s.size != num_layers: - raise ValueError(f'Shape of layer-wise manipulation strength `{s.size}` ' - f'mismatches number of layers `{num_layers}`!') - else: - raise ValueError(f'Unsupported type of `layerwise_manipulation_strength`!') - s = np.array(s).reshape( - [num_layers if axis == 0 else 1 for axis in range(b.ndim)]) - b = b * s - - if x.shape[1:] != b.shape: - raise ValueError(f'Latent code shape {x.shape} and boundary shape ' - f'{b.shape} mismatch!') - num = x.shape[0] - code_shape = x.shape[2:] - - x = x[:, np.newaxis] - b = b[np.newaxis, np.newaxis, :] - l = np.linspace(start_distance, end_distance, step).reshape( - [step if axis == 1 else 1 for axis in range(x.ndim)]) - results = np.tile(x, [step if axis == 1 else 1 for axis in range(x.ndim)]) - is_manipulatable = np.zeros(results.shape, dtype=bool) - is_manipulatable[:, :, layer_indices] = True - results = np.where(is_manipulatable, x + l * b, results) - assert results.shape == (num, step, num_layers, *code_shape) - - return results if layerwise_manipulation else results[:, :, 0] - - -def manipulate2(latent_codes, - proj, - mindex, - start_distance=-5.0, - end_distance=5.0, - step=21, - layerwise_manipulation=False, - num_layers=1, - manipulate_layers=None, - is_code_layerwise=False, - layerwise_manipulation_strength=1.0): - - - if not layerwise_manipulation: - assert not is_code_layerwise -# assert not is_boundary_layerwise - num_layers = 1 - manipulate_layers = None - layerwise_manipulation_strength = 1.0 - - # Preprocessing for layer-wise manipulation. - # Parse indices of manipulation layers. - layer_indices = parse_indices( - manipulate_layers, min_val=0, max_val=num_layers - 1) - if not layer_indices: - layer_indices = list(range(num_layers)) - # Make latent codes layer-wise if needed. - assert num_layers > 0 - if not is_code_layerwise: - x = latent_codes[:, np.newaxis] - x = np.tile(x, [num_layers if axis == 1 else 1 for axis in range(x.ndim)]) - else: - x = latent_codes - if x.shape[1] != num_layers: - raise ValueError(f'Latent codes should be with shape [num, num_layers, ' - f'*code_shape], where `num_layers` equals to ' - f'{num_layers}, but {x.shape} is received!') - # Make boundary layer-wise if needed. 
-# if not is_boundary_layerwise: -# b = boundary -# b = np.tile(b, [num_layers if axis == 0 else 1 for axis in range(b.ndim)]) -# else: -# b = boundary[0] -# if b.shape[0] != num_layers: -# raise ValueError(f'Boundary should be with shape [num_layers, ' -# f'*code_shape], where `num_layers` equals to ' -# f'{num_layers}, but {b.shape} is received!') - # Get layer-wise manipulation strength. - if isinstance(layerwise_manipulation_strength, (int, float)): - s = [float(layerwise_manipulation_strength) for _ in range(num_layers)] - elif isinstance(layerwise_manipulation_strength, (list, tuple)): - s = layerwise_manipulation_strength - if len(s) != num_layers: - raise ValueError(f'Shape of layer-wise manipulation strength `{len(s)}` ' - f'mismatches number of layers `{num_layers}`!') - elif isinstance(layerwise_manipulation_strength, np.ndarray): - s = layerwise_manipulation_strength - if s.size != num_layers: - raise ValueError(f'Shape of layer-wise manipulation strength `{s.size}` ' - f'mismatches number of layers `{num_layers}`!') - else: - raise ValueError(f'Unsupported type of `layerwise_manipulation_strength`!') -# s = np.array(s).reshape( -# [num_layers if axis == 0 else 1 for axis in range(b.ndim)]) -# b = b * s - -# if x.shape[1:] != b.shape: -# raise ValueError(f'Latent code shape {x.shape} and boundary shape ' -# f'{b.shape} mismatch!') - num = x.shape[0] - code_shape = x.shape[2:] - - x = x[:, np.newaxis] -# b = b[np.newaxis, np.newaxis, :] -# l = np.linspace(start_distance, end_distance, step).reshape( -# [step if axis == 1 else 1 for axis in range(x.ndim)]) - results = np.tile(x, [step if axis == 1 else 1 for axis in range(x.ndim)]) - is_manipulatable = np.zeros(results.shape, dtype=bool) - is_manipulatable[:, :, layer_indices] = True - - tmp=MPC(proj,x,mindex,start_distance,end_distance,step) - tmp = tmp[:, :,np.newaxis] - tmp1 = np.tile(tmp, [num_layers if axis == 2 else 1 for axis in range(tmp.ndim)]) - - - results = np.where(is_manipulatable, tmp1, results) -# print(results.shape) - assert results.shape == (num, step, num_layers, *code_shape) - return results if layerwise_manipulation else results[:, :, 0] - -def MPC(proj,x,mindex,start_distance,end_distance,step): - # x shape (batch_size,1,num_layers,feature) -# print(x.shape) - x1=proj.transform(x[:,0,0,:]) #/np.sqrt(proj.explained_variance_) # (batch_size,num_pc) - - x1 = x1[:, np.newaxis] - x1 = np.tile(x1, [step if axis == 1 else 1 for axis in range(x1.ndim)]) - - - l = np.linspace(start_distance, end_distance, step)[None,:] - x1[:,:,mindex]+=l - - tmp=x1.reshape((-1,x1.shape[-1])) #*np.sqrt(proj.explained_variance_) -# print('xxx') - x2=proj.inverse_transform(tmp) - x2=x2.reshape((x1.shape[0],x1.shape[1],-1)) - -# x1 = x1[:, np.newaxis] -# x1 = np.tile(x1, [step if axis == 1 else 1 for axis in range(x1.ndim)]) - - return x2 - - - - -def parse_boundary_list(boundary_list_path): - """Parses boundary list. - - Sometimes, a text file containing a list of boundaries will significantly - simplify image manipulation with a large amount of boundaries. This function - is used to parse boundary information from such list file. - - Basically, each item in the list should be with format - `($NAME, $SPACE_TYPE): $PATH`. `DISABLE` at the beginning of the line can - disable a particular boundary. - - Sample: - - (age, z): $AGE_BOUNDARY_PATH - (gender, w): $GENDER_BOUNDARY_PATH - DISABLE(pose, wp): $POSE_BOUNDARY_PATH - - Args: - boundary_list_path: Path to the boundary list. 
- - Returns: - A dictionary, whose key is a two-element tuple (boundary_name, space_type) - and value is the corresponding boundary path. - - Raise: - ValueError: If the given boundary list does not exist. - """ - if not os.path.isfile(boundary_list_path): - raise ValueError(f'Boundary list `boundary_list_path` does not exist!') - - boundaries = {} - with open(boundary_list_path, 'r') as f: - for line in f: - if line[:len('DISABLE')] == 'DISABLE': - continue - boundary_info, boundary_path = line.strip().split(':') - boundary_name, space_type = boundary_info.strip()[1:-1].split(',') - boundary_name = boundary_name.strip() - space_type = space_type.strip().lower() - boundary_path = boundary_path.strip() - boundaries[(boundary_name, space_type)] = boundary_path - return boundaries diff --git a/spaces/h2oai/wave-tour/examples/message_bar.py b/spaces/h2oai/wave-tour/examples/message_bar.py deleted file mode 100644 index 963b025ba9767ad66056b0e3cd97f9cb5898bfe4..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/message_bar.py +++ /dev/null @@ -1,27 +0,0 @@ -# Form / Message Bar -# Use message bars to indicate relevant status information. -# #form #message_bar -# --- -from h2o_wave import site, ui - -page = site['/demo'] - -page['example'] = ui.form_card( - box='1 1 3 7', - items=[ - ui.message_bar(type='blocked', text='This action is blocked.'), - ui.message_bar(type='error', text='This is an error message'), - ui.message_bar(type='warning', text='This is a warning message.'), - ui.message_bar(type='info', text='This is an information message.'), - ui.message_bar(type='success', text='This is a success message.'), - ui.message_bar(type='danger', text='This is a danger message.'), - ui.message_bar(type='success', text='This is a **MARKDOWN** _message_.'), - ui.message_bar(type='success', text='This is an HTML message.'), - ui.message_bar(type='info', text='With a button.', buttons=[ui.button(name='btn', label='Button')]), - ui.message_bar(type='info', text='With a button as link.', - buttons=[ui.button(name='btn', label='Button', link=True)]), - ui.message_bar(type='info', text='With multiline text that should hopefully span at least 2 rows', - buttons=[ui.button(name='btn', label='Button')]), - ] -) -page.save() diff --git a/spaces/hamzapehlivan/StyleRes/options/__init__.py b/spaces/hamzapehlivan/StyleRes/options/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/conf.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/conf.py deleted file mode 100644 index 44e9f2b4db549a3a5ef1420b27d408915e86657c..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/conf.py +++ /dev/null @@ -1,335 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -# flake8: noqa - -# Configuration file for the Sphinx documentation builder. -# -# This file does only contain a selection of the most common options. 
For a -# full list see the documentation: -# http://www.sphinx-doc.org/en/master/config - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import sys -import mock -from sphinx.domains import Domain -from typing import Dict, List, Tuple - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -import sphinx_rtd_theme - - -class GithubURLDomain(Domain): - """ - Resolve certain links in markdown files to github source. - """ - - name = "githuburl" - ROOT = "https://github.com/facebookresearch/detectron2/blob/master/" - LINKED_DOC = ["tutorials/install", "tutorials/getting_started"] - - def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): - github_url = None - if not target.endswith("html") and target.startswith("../../"): - url = target.replace("../", "") - github_url = url - if fromdocname in self.LINKED_DOC: - # unresolved links in these docs are all github links - github_url = target - - if github_url is not None: - if github_url.endswith("MODEL_ZOO") or github_url.endswith("README"): - # bug of recommonmark. - # https://github.com/readthedocs/recommonmark/blob/ddd56e7717e9745f11300059e4268e204138a6b1/recommonmark/parser.py#L152-L155 - github_url += ".md" - print("Ref {} resolved to github:{}".format(target, github_url)) - contnode["refuri"] = self.ROOT + github_url - return [("githuburl:any", contnode)] - else: - return [] - - -# to support markdown -from recommonmark.parser import CommonMarkParser - -sys.path.insert(0, os.path.abspath("../")) -os.environ["DOC_BUILDING"] = "True" -DEPLOY = os.environ.get("READTHEDOCS") == "True" - - -# -- Project information ----------------------------------------------------- - -# fmt: off -try: - import torch # noqa -except ImportError: - for m in [ - "torch", "torchvision", "torch.nn", "torch.nn.parallel", "torch.distributed", "torch.multiprocessing", "torch.autograd", - "torch.autograd.function", "torch.nn.modules", "torch.nn.modules.utils", "torch.utils", "torch.utils.data", "torch.onnx", - "torchvision", "torchvision.ops", - ]: - sys.modules[m] = mock.Mock(name=m) - sys.modules['torch'].__version__ = "1.5" # fake version - -for m in [ - "cv2", "scipy", "portalocker", "detectron2._C", - "pycocotools", "pycocotools.mask", "pycocotools.coco", "pycocotools.cocoeval", - "google", "google.protobuf", "google.protobuf.internal", "onnx", - "caffe2", "caffe2.proto", "caffe2.python", "caffe2.python.utils", "caffe2.python.onnx", "caffe2.python.onnx.backend", -]: - sys.modules[m] = mock.Mock(name=m) -# fmt: on -sys.modules["cv2"].__version__ = "3.4" - -import detectron2 # isort: skip - - -project = "detectron2" -copyright = "2019-2020, detectron2 contributors" -author = "detectron2 contributors" - -# The short X.Y version -version = detectron2.__version__ -# The full version, including alpha/beta/rc tags -release = version - - -# -- General configuration --------------------------------------------------- - -# If your documentation needs a minimal Sphinx version, state it here. -# -needs_sphinx = "3.0" - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. 
-extensions = [ - "recommonmark", - "sphinx.ext.autodoc", - "sphinx.ext.napoleon", - "sphinx.ext.intersphinx", - "sphinx.ext.todo", - "sphinx.ext.coverage", - "sphinx.ext.mathjax", - "sphinx.ext.viewcode", - "sphinx.ext.githubpages", -] - -# -- Configurations for plugins ------------ -napoleon_google_docstring = True -napoleon_include_init_with_doc = True -napoleon_include_special_with_doc = True -napoleon_numpy_docstring = False -napoleon_use_rtype = False -autodoc_inherit_docstrings = False -autodoc_member_order = "bysource" - -if DEPLOY: - intersphinx_timeout = 10 -else: - # skip this when building locally - intersphinx_timeout = 0.1 -intersphinx_mapping = { - "python": ("https://docs.python.org/3.6", None), - "numpy": ("https://docs.scipy.org/doc/numpy/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} -# ------------------------- - - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -source_suffix = [".rst", ".md"] - -# The master toctree document. -master_doc = "index" - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "build", "README.md", "tutorials/README.md"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" - - -# -- Options for HTML output ------------------------------------------------- - -html_theme = "sphinx_rtd_theme" -html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# The default sidebars (for documents that don't match any pattern) are -# defined by theme itself. Builtin themes are using these templates by -# default: ``['localtoc.html', 'relations.html', 'sourcelink.html', -# 'searchbox.html']``. -# -# html_sidebars = {} - - -# -- Options for HTMLHelp output --------------------------------------------- - -# Output file base name for HTML help builder. -htmlhelp_basename = "detectron2doc" - - -# -- Options for LaTeX output ------------------------------------------------ - -latex_elements = { - # The paper size ('letterpaper' or 'a4paper'). - # - # 'papersize': 'letterpaper', - # The font size ('10pt', '11pt' or '12pt'). - # - # 'pointsize': '10pt', - # Additional stuff for the LaTeX preamble. - # - # 'preamble': '', - # Latex figure (float) alignment - # - # 'figure_align': 'htbp', -} - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, -# author, documentclass [howto, manual, or own class]). 
-latex_documents = [ - (master_doc, "detectron2.tex", "detectron2 Documentation", "detectron2 contributors", "manual") -] - - -# -- Options for manual page output ------------------------------------------ - -# One entry per manual page. List of tuples -# (source start file, name, description, authors, manual section). -man_pages = [(master_doc, "detectron2", "detectron2 Documentation", [author], 1)] - - -# -- Options for Texinfo output ---------------------------------------------- - -# Grouping the document tree into Texinfo files. List of tuples -# (source start file, target name, title, author, -# dir menu entry, description, category) -texinfo_documents = [ - ( - master_doc, - "detectron2", - "detectron2 Documentation", - author, - "detectron2", - "One line description of project.", - "Miscellaneous", - ) -] - - -# -- Options for todo extension ---------------------------------------------- - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = True - - -_DEPRECATED_NAMES = set() - - -def autodoc_skip_member(app, what, name, obj, skip, options): - # we hide something deliberately - if getattr(obj, "__HIDE_SPHINX_DOC__", False): - return True - # Hide some names that are deprecated or not intended to be used - if name in _DEPRECATED_NAMES: - return True - return None - - -_PAPER_DATA = { - "resnet": ("1512.03385", "Deep Residual Learning for Image Recognition"), - "fpn": ("1612.03144", "Feature Pyramid Networks for Object Detection"), - "mask r-cnn": ("1703.06870", "Mask R-CNN"), - "faster r-cnn": ( - "1506.01497", - "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", - ), - "deformconv": ("1703.06211", "Deformable Convolutional Networks"), - "deformconv2": ("1811.11168", "Deformable ConvNets v2: More Deformable, Better Results"), - "panopticfpn": ("1901.02446", "Panoptic Feature Pyramid Networks"), - "retinanet": ("1708.02002", "Focal Loss for Dense Object Detection"), - "cascade r-cnn": ("1712.00726", "Cascade R-CNN: Delving into High Quality Object Detection"), - "lvis": ("1908.03195", "LVIS: A Dataset for Large Vocabulary Instance Segmentation"), - "rrpn": ("1703.01086", "Arbitrary-Oriented Scene Text Detection via Rotation Proposals"), - "in1k1h": ("1706.02677", "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"), -} - - -def paper_ref_role( - typ: str, - rawtext: str, - text: str, - lineno: int, - inliner, - options: Dict = {}, - content: List[str] = [], -): - """ - Parse :paper:`xxx`. Similar to the "extlinks" sphinx extension. 
- """ - from docutils import nodes, utils - from sphinx.util.nodes import split_explicit_title - - text = utils.unescape(text) - has_explicit_title, title, link = split_explicit_title(text) - link = link.lower() - if link not in _PAPER_DATA: - inliner.reporter.warning("Cannot find paper " + link) - paper_url, paper_title = "#", link - else: - paper_url, paper_title = _PAPER_DATA[link] - if "/" not in paper_url: - paper_url = "https://arxiv.org/abs/" + paper_url - if not has_explicit_title: - title = paper_title - pnode = nodes.reference(title, title, internal=False, refuri=paper_url) - return [pnode], [] - - -def setup(app): - from recommonmark.transform import AutoStructify - - app.add_domain(GithubURLDomain) - app.connect("autodoc-skip-member", autodoc_skip_member) - app.add_role("paper", paper_ref_role) - app.add_config_value( - "recommonmark_config", - {"enable_math": True, "enable_inline_math": True, "enable_eval_rst": True}, - True, - ) - app.add_transform(AutoStructify) diff --git a/spaces/hdhzk/bingo/src/components/tailwind-indicator.tsx b/spaces/hdhzk/bingo/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
    -
    xs
    -
    sm
    -
    md
    -
    lg
    -
    xl
    -
    2xl
    -
    - ) -} diff --git a/spaces/hebert2099/MusicGen/audiocraft/modules/conv.py b/spaces/hebert2099/MusicGen/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. 
- """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! - """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/henryezell/freewilly/README.md b/spaces/henryezell/freewilly/README.md deleted file mode 100644 index 20aba09f80ff594c8b68f1de9662b21f05b9d9d9..0000000000000000000000000000000000000000 --- a/spaces/henryezell/freewilly/README.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: LabelStudio -emoji: 🟧 -colorFrom: yellow -colorTo: purple -sdk: docker -tags: -- label-studio -fullwidth: true -license: apache-2.0 -app_port: 8080 -duplicated_from: LabelStudio/LabelStudio ---- - - -[Website](https://hubs.ly/Q01CNgsd0) • [Docs](https://hubs.ly/Q01CN9Yq0) • [12K+ GitHub ⭐️!](https://hubs.ly/Q01CNbPQ0) • [Slack Community](https://hubs.ly/Q01CNb9H0) - -## What is Label Studio? - -Label Studio is an open source data labeling platform. It lets you label audio, -text, images, videos, and time series data with a simple, straightforward, and -highly-configurable user interface. Label Studio can prepare new data or -improve existing training data to get more accurate ML models. - - -## Label Studio in Hugging Face Spaces - -The Label Studio community is thrilled to offer Label Studio as a Hugging Face -Spaces application. You can try the data-annotation interface, connect popular -machine learning models, and share the application with collaborators. You can -start immediately by creating an account or replicate the space and work in -your own environment. - -## Creating a Use Account and Logging In - -Begin by creating a new account in the Label Studio space, then log in with your -credentials. - -**By default, these spaces permit anyone to create a new login -account, allowing them to view and modify project configuration, data sets, and -annotations. Without any modifications, treat this space like a demo environment.** - -## Creating a Labeling Project - -After logging in, Label Studio will present you with a project view. Here you -can create a new project with prompts to upload data and set up a custom -configuration interface. - -**Note that in the default configuration, storage is local and temporary. Any -projects, annotations, and configurations will be lost if the space is restarted.** - -## Next Steps and Additional Resources - -To help with getting started, the Label Studio community curated a list of -resources including tutorials and documentation. 
-
-- 🚀 [Zero to One with Label Studio Tutorial](https://labelstud.io/blog/introduction-to-label-studio-in-hugging-face-spaces/)
-- 📈 [Try Label Studio Enterprise](https://hubs.ly/Q01CMLll0)
-- 🤗 [Tutorial: Using Label Studio with Hugging Face Datasets Hub](https://danielvanstrien.xyz/huggingface/huggingface-datasets/annotation/full%20stack%20deep%20learning%20notes/2022/09/07/label-studio-annotations-hub.html)
-- 💡 [Label Studio Docs](https://hubs.ly/Q01CN9Yq0)
-
-
-![Gif of Label Studio annotating different types of data](https://raw.githubusercontent.com/heartexlabs/label-studio/master/images/annotation_examples.gif)
-
-### Making your Label Studio Hugging Face Space production-ready
-
-By default this space allows for the unrestricted creation of new accounts with full access to all projects and data. This is great for trying out Label Studio and collaborating on projects, but you may want to restrict access to your space to only authorized users. Add the following environment variable to your space's Dockerfile to disable public account creation for this space.
-
-    ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true
-
-Set secrets in your space to create an initial user, and log in with your provided username and password. Do not set these in your Dockerfile, as they are globally visible on a public space.
-
-    LABEL_STUDIO_USERNAME
-    LABEL_STUDIO_PASSWORD
-
-You will need to provide new users with an invitation link to join the space, which can be found in the Organizations interface of Label Studio.
-
-By default this space stores all project configuration and data annotations in local storage with SQLite. If the space is reset, all configuration and annotation data in the space will be lost. You can enable configuration persistence by connecting an external Postgres database to your space, guaranteeing that all project and annotation settings are preserved.
-
-Set the following secret variables to match your own hosted instance of Postgres. We strongly recommend setting these as secrets to prevent leaking information about your database service to the public in your space's definition.
-
-    DJANGO_DB=default
-    POSTGRE_NAME=
-    POSTGRE_PORT=
-    POSTGRE_USER=
-    POSTGRE_PASSWORD=
-    POSTGRE_HOST=
-
-Add the following environment variable to remove the warning about ephemeral storage.
-
-    ENV STORAGE_PERSISTENCE=1
-
-Note that you will need to connect cloud storage to host data items that you want to annotate, as local storage will not be preserved across a space reset.
-
-By default the only data storage enabled for this space is local. In the case of a space reset, all data will be lost. To enable permanent storage, you must enable a cloud storage connector. We also strongly recommend enabling configuration persistence to preserve project data, annotations, and user settings. Choose the appropriate cloud connector and configure the secrets for it.
-
-#### Amazon S3
-
-    STORAGE_TYPE=s3
-    STORAGE_AWS_ACCESS_KEY_ID=""
-    STORAGE_AWS_SECRET_ACCESS_KEY=""
-    STORAGE_AWS_BUCKET_NAME=""
-    STORAGE_AWS_REGION_NAME=""
-    STORAGE_AWS_FOLDER=""
-
-#### Google Cloud Storage
-
-    STORAGE_TYPE=gcs
-    STORAGE_GCS_BUCKET_NAME=""
-    STORAGE_GCS_PROJECT_ID=""
-    STORAGE_GCS_FOLDER=""
-    GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json"
-
-#### Azure Blob Storage
-
-    STORAGE_TYPE=azure
-    STORAGE_AZURE_ACCOUNT_NAME=""
-    STORAGE_AZURE_ACCOUNT_KEY=""
-    STORAGE_AZURE_CONTAINER_NAME=""
-    STORAGE_AZURE_FOLDER=""
-
-
-## Questions? Concerns? Want to get involved?
- -Email the community team at [community@labelstud.io](mailto:community@labelstud.io) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/data_augmentation/data_augmentation_noDA.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/data_augmentation/data_augmentation_noDA.py deleted file mode 100644 index f5fe2fc6db782b5f436a86b38c9d5376d02d8973..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/data_augmentation/data_augmentation_noDA.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from batchgenerators.dataloading.multi_threaded_augmenter import MultiThreadedAugmenter -from batchgenerators.transforms.abstract_transforms import Compose -from batchgenerators.transforms.channel_selection_transforms import DataChannelSelectionTransform, \ - SegChannelSelectionTransform -from batchgenerators.transforms.utility_transforms import RemoveLabelTransform, RenameTransform, NumpyToTensor - -from nnunet.training.data_augmentation.custom_transforms import ConvertSegmentationToRegionsTransform -from nnunet.training.data_augmentation.default_data_augmentation import default_3D_augmentation_params -from nnunet.training.data_augmentation.downsampling import DownsampleSegForDSTransform3, DownsampleSegForDSTransform2 - -try: - from batchgenerators.dataloading.nondet_multi_threaded_augmenter import NonDetMultiThreadedAugmenter -except ImportError as ie: - NonDetMultiThreadedAugmenter = None - - -def get_no_augmentation(dataloader_train, dataloader_val, params=default_3D_augmentation_params, - deep_supervision_scales=None, soft_ds=False, - classes=None, pin_memory=True, regions=None): - """ - use this instead of get_default_augmentation (drop in replacement) to turn off all data augmentation - """ - tr_transforms = [] - - if params.get("selected_data_channels") is not None: - tr_transforms.append(DataChannelSelectionTransform(params.get("selected_data_channels"))) - - if params.get("selected_seg_channels") is not None: - tr_transforms.append(SegChannelSelectionTransform(params.get("selected_seg_channels"))) - - tr_transforms.append(RemoveLabelTransform(-1, 0)) - - tr_transforms.append(RenameTransform('seg', 'target', True)) - - if regions is not None: - tr_transforms.append(ConvertSegmentationToRegionsTransform(regions, 'target', 'target')) - - if deep_supervision_scales is not None: - if soft_ds: - assert classes is not None - tr_transforms.append(DownsampleSegForDSTransform3(deep_supervision_scales, 'target', 'target', classes)) - else: - tr_transforms.append(DownsampleSegForDSTransform2(deep_supervision_scales, 0, input_key='target', - output_key='target')) - - tr_transforms.append(NumpyToTensor(['data', 'target'], 'float')) - - tr_transforms = Compose(tr_transforms) - - batchgenerator_train = MultiThreadedAugmenter(dataloader_train, tr_transforms, 
params.get('num_threads'), - params.get("num_cached_per_thread"), - seeds=range(params.get('num_threads')), pin_memory=pin_memory) - batchgenerator_train.restart() - - val_transforms = [] - val_transforms.append(RemoveLabelTransform(-1, 0)) - if params.get("selected_data_channels") is not None: - val_transforms.append(DataChannelSelectionTransform(params.get("selected_data_channels"))) - if params.get("selected_seg_channels") is not None: - val_transforms.append(SegChannelSelectionTransform(params.get("selected_seg_channels"))) - - val_transforms.append(RenameTransform('seg', 'target', True)) - - if regions is not None: - val_transforms.append(ConvertSegmentationToRegionsTransform(regions, 'target', 'target')) - - if deep_supervision_scales is not None: - if soft_ds: - assert classes is not None - val_transforms.append(DownsampleSegForDSTransform3(deep_supervision_scales, 'target', 'target', classes)) - else: - val_transforms.append(DownsampleSegForDSTransform2(deep_supervision_scales, 0, input_key='target', - output_key='target')) - - val_transforms.append(NumpyToTensor(['data', 'target'], 'float')) - val_transforms = Compose(val_transforms) - - batchgenerator_val = MultiThreadedAugmenter(dataloader_val, val_transforms, max(params.get('num_threads') // 2, 1), - params.get("num_cached_per_thread"), - seeds=range(max(params.get('num_threads') // 2, 1)), - pin_memory=pin_memory) - batchgenerator_val.restart() - return batchgenerator_train, batchgenerator_val - diff --git a/spaces/huggan/butterfly-gan/demo.py b/spaces/huggan/butterfly-gan/demo.py deleted file mode 100644 index 9f1473df56b436c367f373fe585b259bc34d25fe..0000000000000000000000000000000000000000 --- a/spaces/huggan/butterfly-gan/demo.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch -from huggan.pytorch.lightweight_gan.lightweight_gan import LightweightGAN -from datasets import load_dataset -from PIL import Image -import numpy as np -import paddlehub as hub -import random -from PIL import ImageDraw,ImageFont - -import streamlit as st - -@st.experimental_singleton -def load_bg_model(): - bg_model = hub.Module(name='U2NetP', directory='assets/models/') - return bg_model - - -bg_model = load_bg_model() -def remove_bg(img): - result = bg_model.Segmentation( - images=[np.array(img)[:,:,::-1]], - paths=None, - batch_size=1, - input_size=320, - output_dir=None, - visualization=False) - output = result[0] - mask=Image.fromarray(output['mask']) - front=Image.fromarray(output['front'][:,:,::-1]).convert("RGBA") - front.putalpha(mask) - return front - -meme_template=Image.open("./assets/pigeon_meme.jpg").convert("RGBA") -def make_meme(pigeon,text="Is this a pigeon?",show_text=True,remove_background=True): - - meme=meme_template.copy() - approx_butterfly_center=(850,30) - - if remove_background: - pigeon=remove_bg(pigeon) - - else: - pigeon=Image.fromarray(pigeon).convert("RGBA") - - random_rotate=random.randint(-30,30) - random_size=random.randint(150,200) - pigeon=pigeon.resize((random_size,random_size)).rotate(random_rotate,expand=True) - - meme.alpha_composite(pigeon, approx_butterfly_center) - - #ref: https://blog.lipsumarium.com/caption-memes-in-python/ - def drawTextWithOutline(text, x, y): - draw.text((x-2, y-2), text,(0,0,0),font=font) - draw.text((x+2, y-2), text,(0,0,0),font=font) - draw.text((x+2, y+2), text,(0,0,0),font=font) - draw.text((x-2, y+2), text,(0,0,0),font=font) - draw.text((x, y), text, (255,255,255), font=font) - - if show_text: - draw = ImageDraw.Draw(meme) - font_size=52 - font = 
ImageFont.truetype("assets/impact.ttf", font_size) - w, h = draw.textsize(text, font) # measure the size the text will take - drawTextWithOutline(text, meme.width/2 - w/2, meme.height - font_size*2) - meme = meme.convert("RGB") - return meme - -def get_train_data(dataset_name="huggan/smithsonian_butterflies_subset"): - dataset=load_dataset(dataset_name) - dataset=dataset.sort("sim_score") - return dataset["train"] - -from transformers import BeitFeatureExtractor, BeitForImageClassification -emb_feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224') -emb_model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224') -def embed(images): - inputs = emb_feature_extractor(images=images, return_tensors="pt") - outputs = emb_model(**inputs,output_hidden_states= True) - last_hidden=outputs.hidden_states[-1] - pooler=emb_model.base_model.pooler - final_emb=pooler(last_hidden).detach().numpy() - return final_emb - -def build_index(): - dataset=get_train_data() - ds_with_embeddings = dataset.map(lambda x: {"beit_embeddings":embed(x["image"])},batched=True,batch_size=20) - ds_with_embeddings.add_faiss_index(column='beit_embeddings') - ds_with_embeddings.save_faiss_index('beit_embeddings', 'beit_index.faiss') - -def get_dataset(): - dataset=get_train_data() - dataset.load_faiss_index('beit_embeddings', 'beit_index.faiss') - return dataset - -def load_model(model_name='ceyda/butterfly_cropped_uniq1K_512',model_version=None): - gan = LightweightGAN.from_pretrained(model_name,version=model_version) - gan.eval() - return gan - -def generate(gan,batch_size=1): - with torch.no_grad(): - ims = gan.G(torch.randn(batch_size, gan.latent_dim)).clamp_(0., 1.)*255 - ims = ims.permute(0,2,3,1).detach().cpu().numpy().astype(np.uint8) - return ims - -def interpolate(): - pass \ No newline at end of file diff --git a/spaces/huggingface/metric-explorer/bleu_metric_card.md b/spaces/huggingface/metric-explorer/bleu_metric_card.md deleted file mode 100644 index e8f93faa776a98f55ae33352ab816a23a1b49a12..0000000000000000000000000000000000000000 --- a/spaces/huggingface/metric-explorer/bleu_metric_card.md +++ /dev/null @@ -1,123 +0,0 @@ -# Metric Card for BLEU - - -## Metric Description -BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics. - -Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Neither intelligibility nor grammatical correctness are not taken into account. - -## Intended Uses -BLEU and BLEU-derived metrics are most often used for machine translation. - -## How to Use - -This metric takes as input lists of predicted sentences and reference sentences: - -```python ->>> predictions = [ -... ["hello", "there", "general", "kenobi", -... ["foo", "bar" "foobar"] -... ] ->>> references = [ -... [["hello", "there", "general", "kenobi"]], -... 
[["foo", "bar", "foobar"]] -... ] ->>> bleu = datasets.load_metric("bleu") ->>> results = bleu.compute(predictions=predictions, references=references) ->>> print(results) -{'bleu': 0.6370964381207871, 'precisions': [0.8333333333333334, 0.75, 1.0, 1.0], 'brevity_penalty': 0.7165313105737893, 'length_ratio': 0.75, 'translation_length': 6, 'reference_length': 8} -``` - -### Inputs -- **predictions** (`list` of `list`s): Translations to score. Each translation should be tokenized into a list of tokens. -- **references** (`list` of `list`s): references for each translation. Each reference should be tokenized into a list of tokens. -- **max_order** (`int`): Maximum n-gram order to use when computing BLEU score. Defaults to `4`. -- **smooth** (`boolean`): Whether or not to apply Lin et al. 2004 smoothing. Defaults to `False`. - -### Output Values -- **bleu** (`float`): bleu score -- **precisions** (`list` of `float`s): geometric mean of n-gram precisions, -- **brevity_penalty** (`float`): brevity penalty, -- **length_ratio** (`float`): ratio of lengths, -- **translation_length** (`int`): translation_length, -- **reference_length** (`int`): reference_length - -Output Example: -```python -{'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.167, 'translation_length': 7, 'reference_length': 6} -``` - -BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score. - -#### Values from Popular Papers -The [original BLEU paper](https://aclanthology.org/P02-1040/) (Papineni et al. 2002) compares BLEU scores of five different models on the same 500-sentence corpus. These scores ranged from 0.0527 to 0.2571. - -The [Attention is All you Need paper](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) (Vaswani et al. 2017) got a BLEU score of 0.284 on the WMT 2014 English-to-German translation task, and 0.41 on the WMT 2014 English-to-French translation task. - -### Examples - -Example where each sample has 1 reference: -```python ->>> predictions = [ -... ["hello", "there", "general", "kenobi", -... ["foo", "bar" "foobar"] -... ] ->>> references = [ -... [["hello", "there", "general", "kenobi"]], -... [["foo", "bar", "foobar"]] -... ] ->>> bleu = datasets.load_metric("bleu") ->>> results = bleu.compute(predictions=predictions, references=references) ->>> print(results) -{'bleu': 0.6370964381207871, 'precisions': [0.8333333333333334, 0.75, 1.0, 1.0], 'brevity_penalty': 0.7165313105737893, 'length_ratio': 0.75, 'translation_length': 6, 'reference_length': 8} -``` - -Example where the second sample has 2 references: -```python ->>> predictions = [ -... ["hello", "there", "general", "kenobi", -... ["foo", "bar" "foobar"] -... ] ->>> references = [ -... [["hello", "there", "general", "kenobi"], ["hello", "there", "!"]], -... [["foo", "bar", "foobar"]] -... 
]
->>> bleu = datasets.load_metric("bleu")
->>> results = bleu.compute(predictions=predictions, references=references)
->>> print(results)
-{'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
-```
-
-## Limitations and Bias
-This metric has multiple known limitations and biases:
-- BLEU compares overlap in tokens from the predictions and references, instead of comparing meaning. This can lead to discrepancies between BLEU scores and human ratings.
-- BLEU scores are not comparable across different datasets, nor are they comparable across different languages.
-- BLEU scores can vary greatly depending on which parameters are used to generate the scores, especially when different tokenization and normalization techniques are used. It is therefore not possible to compare BLEU scores generated using different parameters, or when these parameters are unknown.
-- Shorter predicted translations achieve higher scores than longer ones, simply due to how the score is calculated. A brevity penalty is introduced to attempt to counteract this.
-
-
-## Citation
-```bibtex
-@INPROCEEDINGS{Papineni02bleu:a,
-  author = {Kishore Papineni and Salim Roukos and Todd Ward and Wei-jing Zhu},
-  title = {BLEU: a Method for Automatic Evaluation of Machine Translation},
-  booktitle = {},
-  year = {2002},
-  pages = {311--318}
-}
-@inproceedings{lin-och-2004-orange,
-  title = "{ORANGE}: a Method for Evaluating Automatic Evaluation Metrics for Machine Translation",
-  author = "Lin, Chin-Yew and
-    Och, Franz Josef",
-  booktitle = "{COLING} 2004: Proceedings of the 20th International Conference on Computational Linguistics",
-  month = "aug 23{--}aug 27",
-  year = "2004",
-  address = "Geneva, Switzerland",
-  publisher = "COLING",
-  url = "https://www.aclweb.org/anthology/C04-1072",
-  pages = "501--507",
-}
-```
-
-## Further References
-- This Hugging Face implementation uses [this Tensorflow implementation](https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py)
diff --git a/spaces/hussain-shk/IndiSent/subword-nmt/get_vocab.py b/spaces/hussain-shk/IndiSent/subword-nmt/get_vocab.py deleted file mode 100644 index 76eb55904a0bf46c32d140848bda384dad584ca6..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/subword-nmt/get_vocab.py +++ /dev/null @@ -1,82 +0,0 @@ -#!
/usr/bin/env python -from __future__ import print_function - -import os -import sys -import inspect -import warnings -import argparse -import codecs - -from collections import Counter - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('get-vocab', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="Generates vocabulary") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="Generates vocabulary") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), default=sys.stdin, - metavar='PATH', - help="Input file (default: standard input).") - - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file (default: standard output)") - - return parser - -def get_vocab(train_file, vocab_file): - - c = Counter() - - for line in train_file: - for word in line.strip('\r\n ').split(' '): - if word: - c[word] += 1 - - for key,f in sorted(c.items(), key=lambda x: x[1], reverse=True): - vocab_file.write(key+" "+ str(f) + "\n") - -if __name__ == "__main__": - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.simplefilter('default') - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) - - parser = create_parser() - args = parser.parse_args() - - # read/write files as UTF-8 - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - get_vocab(args.input, args.output) \ No newline at end of file diff --git a/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/segment_char_ngrams.py b/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/segment_char_ngrams.py deleted file mode 100644 index 8d94bc7a36eb3163271e95e167190d7423564308..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/segment_char_ngrams.py +++ /dev/null @@ -1,95 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -from __future__ import unicode_literals, division - -import sys -import codecs -import argparse - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('segment-char-ngrams', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="segment rare words into character n-grams") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="segment rare words into character n-grams") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), 
default=sys.stdin, - metavar='PATH', - help="Input file (default: standard input).") - parser.add_argument( - '--vocab', type=argparse.FileType('r'), metavar='PATH', - required=True, - help="Vocabulary file.") - parser.add_argument( - '--shortlist', type=int, metavar='INT', default=0, - help="do not segment INT most frequent words in vocabulary (default: '%(default)s')).") - parser.add_argument( - '-n', type=int, metavar='INT', default=2, - help="segment rare words into character n-grams of size INT (default: '%(default)s')).") - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file (default: standard output)") - parser.add_argument( - '--separator', '-s', type=str, default='@@', metavar='STR', - help="Separator between non-final subword units (default: '%(default)s'))") - - return parser - -def segment_char_ngrams(args): - - vocab = [line.split()[0] for line in args.vocab if len(line.split()) == 2] - vocab = dict((y,x) for (x,y) in enumerate(vocab)) - - for line in args.input: - for word in line.split(): - if word not in vocab or vocab[word] > args.shortlist: - i = 0 - while i*args.n < len(word): - args.output.write(word[i*args.n:i*args.n+args.n]) - i += 1 - if i*args.n < len(word): - args.output.write(args.separator) - args.output.write(' ') - else: - args.output.write(word + ' ') - args.output.write('\n') - - -if __name__ == '__main__': - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) - - parser = create_parser() - args = parser.parse_args() - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - - # read/write files as UTF-8 - args.vocab = codecs.open(args.vocab.name, encoding='utf-8') - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - segment_char_ngrams(args) \ No newline at end of file diff --git a/spaces/impira/flan-playground/app.py b/spaces/impira/flan-playground/app.py deleted file mode 100644 index bf20fdb1c0f2d772059f8bba9e159168044f6cec..0000000000000000000000000000000000000000 --- a/spaces/impira/flan-playground/app.py +++ /dev/null @@ -1,33 +0,0 @@ -from transformers import pipeline -import gradio as gr - -PIPELINES = {} - - -def build_pipeline(size): - global PIPELINES - if size in PIPELINES: - return PIPELINES[size] - - PIPELINES[size] = pipeline( - "text2text-generation", model=f"google/flan-t5-{size}", max_length=256 - ) - return PIPELINES[size] - - -def greet(input_text, size): - pipe = build_pipeline(size) - return pipe(input_text)[0]["generated_text"] - - -demo = gr.Interface( - fn=greet, - inputs=[ - gr.Textbox(lines=2, placeholder="Enter your task text..."), - gr.Radio(choices=["small", "base", "large", "xl"], value="base"), - ], - outputs=[gr.Textbox(lines=2)], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/inamXcontru/PoeticTTS/Command And Conquer Renegade Windows PC Excellent Game Dna Hack ((FREE)).md b/spaces/inamXcontru/PoeticTTS/Command And Conquer Renegade Windows PC Excellent Game Dna Hack ((FREE)).md deleted file mode 100644 index 
73b6e395be20b81bfe0fa9cadd9c3c63b83a854f..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Command And Conquer Renegade Windows PC Excellent Game Dna Hack ((FREE)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    Command And Conquer Renegade Windows PC Excellent Game Dna Hack


    Download ☆☆☆ https://gohhs.com/2uz47F



    -
-
    -
    -
    -

    diff --git a/spaces/innat/VideoSwin/README.md b/spaces/innat/VideoSwin/README.md deleted file mode 100644 index 5de37fa5d6b9357fefc63463a26b6f2ce8f28b76..0000000000000000000000000000000000000000 --- a/spaces/innat/VideoSwin/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: VideoSwin -emoji: 🌍 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.44.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/innnky/nyaru-svc2.0-advanced/app.py b/spaces/innnky/nyaru-svc2.0-advanced/app.py deleted file mode 100644 index 47ce72b525bc91b46ef57e40baa1a26ccd0757e3..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru-svc2.0-advanced/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import gradio as gr -import soundfile -import torch - -import infer_tool - -convert_cnt = [0] -dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_name = "83_epochs.pth" -config_name = "nyarumul.json" -net_g_ms, hubert_soft, feature_input, hps_ms = infer_tool.load_model(f"{model_name}", f"configs/{config_name}") - -# 获取config参数 -target_sample = hps_ms.data.sampling_rate -spk_dict = { - "猫雷2.0": 0, - "云灏": 2, - "即霜": 3, - "奕兰秋": 4 -} - - -def vc_fn(sid, audio_record, audio_upload, tran): - print(sid) - if audio_upload is not None: - audio_path = audio_upload - elif audio_record is not None: - audio_path = audio_record - else: - return "你需要上传wav文件或使用网页内置的录音!", None - - audio, sampling_rate = infer_tool.format_wav(audio_path, target_sample) - duration = audio.shape[0] / sampling_rate - if duration > 60: - return "请上传小于60s的音频,需要转换长音频请使用colab", None - - o_audio, out_sr = infer_tool.infer(audio_path, spk_dict[sid], tran, net_g_ms, hubert_soft, feature_input) - out_path = f"./out_temp.wav" - soundfile.write(out_path, o_audio, target_sample) - infer_tool.f0_plt(audio_path, out_path, tran, hubert_soft, feature_input) - mistake, var = infer_tool.calc_error(audio_path, out_path, tran, feature_input) - return f"半音偏差:{mistake}\n半音方差:{var}", ( - target_sample, o_audio), gr.Image.update("temp.jpg") - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("Basic"): - gr.Markdown(value=""" - 本模型为sovits_f0(含AI猫雷2.0音色),支持**60s以内**的**无伴奏**wav、mp3(单声道)格式,或使用**网页内置**的录音(二选一) - - 转换效果取决于源音频语气、节奏是否与目标音色相近,以及音域是否超出目标音色音域范围 - - 猫雷音色低音音域效果不佳,如转换男声歌声,建议变调升 **6-10key** - - 该模型的 [github仓库链接](https://github.com/innnky/so-vits-svc),如果想自己制作并训练模型可以访问这个 [github仓库](https://github.com/IceKyrin/sovits_guide) - - """) - speaker_id = gr.Dropdown(label="音色", choices=['猫雷2.0', '云灏', '即霜', "奕兰秋"], value="猫雷2.0") - record_input = gr.Audio(source="microphone", label="录制你的声音", type="filepath", elem_id="audio_inputs") - upload_input = gr.Audio(source="upload", label="上传音频(长度小于60秒)", type="filepath", - elem_id="audio_inputs") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - vc_submit = gr.Button("转换", variant="primary") - out_audio = gr.Audio(label="Output Audio") - gr.Markdown(value=""" - 输出信息为音高平均偏差半音数量,体现转换音频的跑调情况(一般平均小于0.5个半音) - """) - out_message = gr.Textbox(label="跑调误差信息") - gr.Markdown(value="""f0曲线可以直观的显示跑调情况,蓝色为输入音高,橙色为合成音频的音高 - - 若**只看见橙色**,说明蓝色曲线被覆盖,转换效果较好 - - """) - f0_image = gr.Image(label="f0曲线") - vc_submit.click(vc_fn, [speaker_id, record_input, upload_input, vc_transform], - [out_message, out_audio, f0_image]) - with gr.TabItem("使用说明"): - gr.Markdown(value=""" - 0、合集:https://github.com/IceKyrin/sovits_guide/blob/main/README.md - - 1、仅支持sovit_f0(sovits2.0)模型 
- - 2、自行下载hubert-soft-0d54a1f4.pt改名为hubert.pt放置于pth文件夹下(已经下好了) - https://github.com/bshall/hubert/releases/tag/v0.1 - - 3、pth文件夹下放置sovits2.0的模型 - - 4、与模型配套的xxx.json,需有speaker项——人物列表 - - 5、放无伴奏的音频、或网页内置录音,不要放奇奇怪怪的格式 - - 6、仅供交流使用,不对用户行为负责 - - """) - - app.launch() diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ephere Ornatrix V.6.0.12 For 3DsMax 2013-2019 Fixed HOT!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ephere Ornatrix V.6.0.12 For 3DsMax 2013-2019 Fixed HOT!.md deleted file mode 100644 index 162ddb45e8e80f895f39ac610acdaae2b501dd94..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ephere Ornatrix V.6.0.12 For 3DsMax 2013-2019 Fixed HOT!.md +++ /dev/null @@ -1,9 +0,0 @@ - -

    set up character hair in minutes with intuitive placement and brushing controls. play around with a set of procedural hair operators and presets to refine the style of your groom. use our simulation tools which work tightly with 3dsmax animation pipeline to control the movement of hair. finally, render out your masterpieces using your preferred rendering solution: ornatrix supports all of them.

    -

    Ephere Ornatrix V.6.0.12 For 3DsMax 2013-2019 Fixed


    Download Ziphttps://urlin.us/2uExqP



    -

    free indhan.net windows 98/me/nt/2000/xp version 2.0.. valueplus. ephere ornatrix v.6.12 for 3dsmax 2013-2019 fixed elecard avc.12 for 3dsmax 2015-2019.12 for 3ds max 2013-2019.12 for 3ds max 2013-2019 fixes. ornatrix v.12 for 3dsmax 2013-2019 fixed.

    -

    gemvision matrix 6.0 sr2 rhino 4.0 sr5 (fixed read piratebay ins utorrent.. ephere ornatrix v.6.0.12 for 3dsmax 2013-2019 fixed. 3dsmax 2013-2019 fixedephere ornatrix v.12.123ds max. patch fixed. ornatrix is a system designed to solve the problem of creating hair and hair-shaped structures. to achieve this goal it employs a.

    -

Ephere Ornatrix is the application for achieving the ultimate hair simulation of your dreams. Ornatrix is a product that is all about achieving perfect hair simulations using the most current technologies available. The main idea behind Ornatrix is to take advantage of the most recent technology available on the market, and of integration with the most well-known animation and rendering software, such as 3ds Max and Maya. This makes the process as easy and as convenient as possible for the user.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Image Line Sakura _BEST_ Keygen Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Image Line Sakura _BEST_ Keygen Download.md deleted file mode 100644 index 9aed1d5480cc9cf3a2c7232afc93eff91b807058..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Image Line Sakura _BEST_ Keygen Download.md +++ /dev/null @@ -1,12 +0,0 @@ -

    image line sakura keygen download


    Download File 🗸🗸🗸 https://urlin.us/2uEviQ



    -
    -iphone5.Next, we tested if cell death was induced by the miR-520 mimic in the Wnt-responsive cells. Because Wnt3a has been reported to induce both apoptosis and necrosis,[@R28]^,^[@R29] we tested whether Wnt3a could induce cell death in transfected cells. We detected apoptosis by using caspase-3 activation and by measuring the level of cleaved poly ADP ribose polymerase (PARP). The level of caspase-3 activation in cells transfected with miR-520 mimic was lower than that in the negative control cells ([Fig. 3D](#F3)ref-type="fig"). In contrast, cleaved PARP levels were higher in the miR-520 mimic-transfected cells than in the negative control cells, suggesting that the miR-520 mimic induces cell death. These findings indicate that miR-520 overexpression may lead to cell death and apoptosis via regulation of β-catenin/TCF4 signaling. - -Role of miR-520 overexpression in breast cancer cells - ------------------------------------------------------ - -Because increased miR-520 expression is frequently observed in cancer cells, we examined whether miR-520 directly regulates β-catenin/TCF4 signaling in cancer cells. Initially, we assessed the level of β-catenin and TCF4 in miR-520-transfected cells. Transfection with miR-520 mimic reduced the levels of β-catenin and TCF4 in HEK293T and MDA-MB-231 cells ([Fig. 4A](#F4)ref-type="fig"). Next, we determined the role of miR-520 overexpression in the cell growth of breast cancer cells. The viability of breast cancer cells was reduced by miR-520 overexpression ([Fig. 4B](#F4)ref-type="fig"). We also determined whether the cell cycle distribution of breast cancer cells was affected by miR-520 overexpression. miR-520 overexpression led to a decreased proportion of breast cancer cells in the S phase ([Fig. 4C](#F4)ref-type="fig"). To further investigate the role of miR-520 overexpression in breast cancer cells, we performed cell invasion assays. miR-520 overe 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Active Boot Disk V15.0.6 Full ISO Version [Latest] LINK.md b/spaces/inreVtussa/clothingai/Examples/Active Boot Disk V15.0.6 Full ISO Version [Latest] LINK.md deleted file mode 100644 index 4beda76da0df53f9cefac1b385d1eda3b876d5e8..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Active Boot Disk V15.0.6 Full ISO Version [Latest] LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Active Boot Disk v15.0.6 Full ISO Version [Latest]


    DOWNLOAD ✓✓✓ https://tiurll.com/2uCjXn



    -
-
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Astm A123 Pdf Free Download.md b/spaces/inreVtussa/clothingai/Examples/Astm A123 Pdf Free Download.md deleted file mode 100644 index 98cbb55e301ee0eaabbb913a4ae5c84503f2c317..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Astm A123 Pdf Free Download.md +++ /dev/null @@ -1,15 +0,0 @@ -
    -

    How to Download ASTM A123 PDF for Free

    -

    ASTM A123 is a standard specification for zinc coating (galvanizing) by the hot-dip process on iron and steel products. It covers both unfabricated and fabricated products, such as structural steel, castings, bars, strips, and wire. It also specifies the requirements for coating thickness, appearance, adherence, and testing.

    -

    If you are looking for a free download of ASTM A123 PDF, you may find some online sources that claim to offer it. However, be aware that these sources may not be authorized or reliable, and may contain viruses or malware. Moreover, downloading ASTM standards without proper permission may violate the intellectual property rights of ASTM International.

    -

    astm a123 pdf free download


    Download File ……… https://tiurll.com/2uCiXA



    -

    The best way to access ASTM A123 PDF is to purchase it from the official website of ASTM International (https://www.astm.org). You can either buy a single copy or subscribe to a package that includes multiple standards. By purchasing from ASTM International, you can ensure that you get the most current and accurate version of the standard, and that you support the development and maintenance of voluntary consensus standards.

    -

    Alternatively, you can also access ASTM A123 PDF through some online libraries or databases that have a subscription or partnership with ASTM International. For example, you can use the Engineering Workbench platform (https://ihsmarkit.com/products/engineering-workbench.html) or the ASTM Compass portal (https://compass.astm.org) to search and view ASTM standards online. However, you may need to register or pay a fee to use these services.

    -

    In summary, ASTM A123 is an important standard for galvanizing iron and steel products. You can download it as a PDF file from the official website of ASTM International or from some online libraries or databases that have a license from ASTM International. You should avoid downloading it from unauthorized or dubious sources that may harm your computer or infringe on the rights of ASTM International.

    - -

    One of the main benefits of galvanizing iron and steel products is that it provides a durable and corrosion-resistant coating that protects the base metal from environmental factors. Galvanized products can last for decades without requiring painting or maintenance, and can withstand harsh conditions such as salt spray, humidity, and abrasion. Galvanizing also enhances the aesthetic appeal of iron and steel products, giving them a bright and uniform appearance.

    -

    However, galvanizing also has some limitations and challenges that need to be considered. For example, some iron and steel products may not be suitable for galvanizing due to their size, shape, or composition. Galvanizing may also cause some changes in the mechanical properties of the base metal, such as increased brittleness or reduced weldability. Moreover, galvanizing may have some environmental impacts, such as generating waste water and emissions from the zinc bath and the heating process.

    -

    Therefore, it is important to follow the guidelines and specifications of ASTM A123 when galvanizing iron and steel products. ASTM A123 provides the minimum requirements for coating quality and performance, as well as the methods for inspection and testing. By adhering to ASTM A123, galvanizers and users can ensure that they produce and receive high-quality galvanized products that meet their expectations and needs.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/jackyccl/segment-anything/groundingdino/util/time_counter.py b/spaces/jackyccl/segment-anything/groundingdino/util/time_counter.py deleted file mode 100644 index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/groundingdino/util/time_counter.py +++ /dev/null @@ -1,62 +0,0 @@ -import json -import time - - -class TimeCounter: - def __init__(self) -> None: - pass - - def clear(self): - self.timedict = {} - self.basetime = time.perf_counter() - - def timeit(self, name): - nowtime = time.perf_counter() - self.basetime - self.timedict[name] = nowtime - self.basetime = time.perf_counter() - - -class TimeHolder: - def __init__(self) -> None: - self.timedict = {} - - def update(self, _timedict: dict): - for k, v in _timedict.items(): - if k not in self.timedict: - self.timedict[k] = AverageMeter(name=k, val_only=True) - self.timedict[k].update(val=v) - - def final_res(self): - return {k: v.avg for k, v in self.timedict.items()} - - def __str__(self): - return json.dumps(self.final_res(), indent=2) - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self, name, fmt=":f", val_only=False): - self.name = name - self.fmt = fmt - self.val_only = val_only - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - if self.val_only: - fmtstr = "{name} {val" + self.fmt + "}" - else: - fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})" - return fmtstr.format(**self.__dict__) diff --git a/spaces/jbilcke-hf/MusicGen/audiocraft/modules/transformer.py b/spaces/jbilcke-hf/MusicGen/audiocraft/modules/transformer.py deleted file mode 100644 index e69cca829d774d0b8b36c0de9b7924373da81b43..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. 
- try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. 
- custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. - """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. 
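            # Added for clarity: the packed projection below is laid out as
            # [q (embed_dim), k (kv_dim), v (kv_dim)]; when kv_repeat > 1 the key/value
            # blocks cover fewer heads and are expanded back to num_heads in forward()
            # via expand_repeated_kv. Exposing in_proj_weight / in_proj_bias mirrors the
            # attribute naming of nn.MultiheadAttention.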
- self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. - return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. 
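        # Added for clarity: the offset passed to rotate_qk below counts the timesteps
        # already consumed by the streaming state (cached past keys plus any context
        # trimmed from the cache), so absolute positions, and hence the relative rotary
        # phases, stay consistent from one streaming call to the next.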
- assert _efficient_attention_backend == 'xformers', 'Rope not supported with torch attn.' - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. - assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. 
- assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. - q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. 
- dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - 
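# LayerScale (when `layer_scale` is not None) multiplies each residual branch by a learned
# per-channel scale initialized to that value, which helps stabilize very deep stacks;
# with `layer_scale=None` the nn.Identity modules above leave the branches untouched.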
self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. - x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. 
- weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... 
- layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/app/data/data.ts b/spaces/jbilcke-hf/VideoChain-UI/src/app/data/data.ts deleted file mode 100644 index 0a2103f1d8d963f347a5d7bba429e9f74ba0ceb5..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/app/data/data.ts +++ /dev/null @@ -1,71 +0,0 @@ -import { - ArrowDownIcon, - ArrowRightIcon, - ArrowUpIcon, - CheckCircledIcon, - CircleIcon, - CrossCircledIcon, - QuestionMarkCircledIcon, - StopwatchIcon, -} from "@radix-ui/react-icons" - -export const labels = [ - { - value: "bug", - label: "Bug", - }, - { - value: "feature", - label: "Feature", - }, - { - value: "documentation", - label: "Documentation", - }, -] - -export const statuses = [ - { - value: "backlog", - label: "Backlog", - icon: QuestionMarkCircledIcon, - }, - { - value: "todo", - label: "Todo", - icon: CircleIcon, - }, - { - value: "in progress", - label: "In Progress", - icon: StopwatchIcon, - }, - { - value: "done", - label: "Done", - icon: CheckCircledIcon, - }, - { - value: "canceled", - label: "Canceled", - icon: CrossCircledIcon, - }, -] - -export const priorities = [ - { - label: "Low", - value: "low", - icon: ArrowDownIcon, - }, - { - label: "Medium", - value: "medium", - icon: ArrowRightIcon, - }, - { - label: "High", - value: "high", - icon: ArrowUpIcon, - }, -] \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/toast.tsx b/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/toast.tsx deleted file mode 100644 index 94b1e9a1d3a82fe1beea6e931c4887e2260371cd..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/toast.tsx +++ /dev/null @@ -1,127 +0,0 @@ -import * as React from "react" -import * as ToastPrimitives from "@radix-ui/react-toast" -import { cva, type VariantProps } from "class-variance-authority" -import { X } from "lucide-react" - -import { cn } from "@/lib/utils" - -const ToastProvider = ToastPrimitives.Provider - -const ToastViewport = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -ToastViewport.displayName = ToastPrimitives.Viewport.displayName - -const toastVariants = cva( - "group pointer-events-auto relative flex w-full items-center justify-between space-x-4 overflow-hidden rounded-md border border-stone-200 p-6 pr-8 shadow-lg transition-all data-[swipe=cancel]:translate-x-0 data-[swipe=end]:translate-x-[var(--radix-toast-swipe-end-x)] data-[swipe=move]:translate-x-[var(--radix-toast-swipe-move-x)] data-[swipe=move]:transition-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[swipe=end]:animate-out data-[state=closed]:fade-out-80 data-[state=closed]:slide-out-to-right-full data-[state=open]:slide-in-from-top-full data-[state=open]:sm:slide-in-from-bottom-full dark:border-stone-800", - { - variants: { - variant: { - default: "border bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50", - destructive: - "destructive group border-red-500 bg-red-500 text-stone-50 
dark:border-red-900 dark:bg-red-900 dark:text-stone-50", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -const Toast = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & - VariantProps ->(({ className, variant, ...props }, ref) => { - return ( - - ) -}) -Toast.displayName = ToastPrimitives.Root.displayName - -const ToastAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -ToastAction.displayName = ToastPrimitives.Action.displayName - -const ToastClose = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - -)) -ToastClose.displayName = ToastPrimitives.Close.displayName - -const ToastTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -ToastTitle.displayName = ToastPrimitives.Title.displayName - -const ToastDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -ToastDescription.displayName = ToastPrimitives.Description.displayName - -type ToastProps = React.ComponentPropsWithoutRef - -type ToastActionElement = React.ReactElement - -export { - type ToastProps, - type ToastActionElement, - ToastProvider, - ToastViewport, - Toast, - ToastTitle, - ToastDescription, - ToastClose, - ToastAction, -} diff --git a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/config.py b/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/config.py deleted file mode 100644 index 5cff8d687edb235b72ad626bc9ca1b9ab88a7338..0000000000000000000000000000000000000000 --- a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/config.py +++ /dev/null @@ -1,169 +0,0 @@ -import os - -CHARPOINT_CONVERSION_DICT = { - "": "leeg", - "101_Q19_2": "buitenkruin", - "101_Q19_3": "binnenkruin", - "101_Q19_5": "binnenteen", - "105_T09_11": "insteek_sloot", - "811_T13_8": "leeg", - "351_T03_10": "leeg", - "_T01_KKW": "leeg", - "108_Q06_250": "leeg", - "303_Q05_1": "leeg", - "353__11": "leeg", - "_T00_17": "leeg", - "109_Q08_13": "leeg", - "_Q07_KDM": "leeg", - "_Q07_KDW": "leeg", - '0': "leeg", - None: "leeg", - 'nan': "leeg" -} - -CLASS_DICT_REGIONAL = { - "leeg": 0, - "startpunt": 1, - "buitenkruin": 2, - "binnenkruin": 3, - "binnenteen": 4, - "insteek_sloot": 5 -} - -WEIGHT_DICT_REGIONAL = [0.1, 1.0, 1.1, 1.0, 0.1] - -CLASS_DICT_FULL = { - 'leeg': 0, - 'Maaiveld binnenwaarts': 1, - 'Insteek sloot polderzijde': 2, - 'Slootbodem polderzijde': 3, - 'Slootbodem dijkzijde': 4, - 'Insteek sloot dijkzijde': 5, - 'Teen dijk binnenwaarts': 6, - 'Kruin binnenberm': 7, - 'Insteek binnenberm': 8, - 'Kruin binnentalud': 9, - 'Verkeersbelasting kant binnenwaarts': 9, # 10 - 'Verkeersbelasting kant buitenwaarts': 10, - 'Kruin buitentalud': 10, # 12 - 'Insteek buitenberm': 11, - 'Kruin buitenberm': 12, - 'Teen dijk buitenwaarts': 13, - 'Insteek geul': 14, - 'Teen geul': 15, - 'Maaiveld buitenwaarts': 16, - } - -# TODO: write this out explicitely -WEIGHT_DICT_FULL = [1.0] * 17 - -CLASS_DICT_SIMPLE = { - 'leeg': 0, - 'Maaiveld buitenwaarts': 1, - 'Teen dijk buitenwaarts': 2, - 'Kruin buitentalud': 3, - 'Kruin binnentalud': 4, - 'Teen dijk binnenwaarts': 5, -} - -WEIGHT_DICT_SIMPLE = [0.1, 0.5, 0.7, 1.0, 1.0, 0.5] - -CLASS_DICT_SIMPLE_SLOOT = { - 'leeg': 0, - 'Maaiveld buitenwaarts': 1, - 'Teen dijk buitenwaarts': 2, - 'Kruin buitentalud': 3, - 'Kruin binnentalud': 4, - 'Teen dijk binnenwaarts': 5, - 'Insteek sloot dijkzijde': 
6, - 'Insteek sloot polderzijde': 7, - 'Slootbodem polderzijde': 8, - 'Slootbodem dijkzijde': 9, -} - -WEIGHT_DICT_SIMPLE_SLOOT = [0.1, 0.1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.1] - -CLASS_DICT_SIMPLE_BERM = { - 'leeg': 0, - 'Maaiveld buitenwaarts': 1, - 'Teen dijk buitenwaarts': 2, - 'Kruin buitentalud': 3, - 'Kruin binnentalud': 4, - 'Teen dijk binnenwaarts': 5, - 'Insteek sloot dijkzijde': 6, - 'Insteek sloot polderzijde': 7, - 'Slootbodem polderzijde': 8, - 'Slootbodem dijkzijde': 9, - 'Kruin binnenberm': 10, - 'Insteek binnenberm': 11, -} -WEIGHT_DICT_SIMPLE_BERM = [0.1, 0.1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.1] - -HEADER = ["LOCATIONID", - "X_Maaiveld binnenwaarts", - "Y_Maaiveld binnenwaarts", - "Z_Maaiveld binnenwaarts", - "X_Insteek sloot polderzijde", - "Y_Insteek sloot polderzijde", - "Z_Insteek sloot polderzijde", - "X_Slootbodem polderzijde", - "Y_Slootbodem polderzijde", - "Z_Slootbodem polderzijde", - "X_Slootbodem dijkzijde", - "Y_Slootbodem dijkzijde", - "Z_Slootbodem dijkzijde", - "X_Insteek sloot dijkzijde", - "Y_Insteek sloot dijkzijde", - "Z_Insteek sloot dijkzijde", - "X_Teen dijk binnenwaarts", - "Y_Teen dijk binnenwaarts", - "Z_Teen dijk binnenwaarts", - "X_Kruin binnenberm", - "Y_Kruin binnenberm", - "Z_Kruin binnenberm", - "X_Insteek binnenberm", - "Y_Insteek binnenberm", - "Z_Insteek binnenberm", - "X_Kruin binnentalud", - "Y_Kruin binnentalud", - "Z_Kruin binnentalud", - "X_Verkeersbelasting kant binnenwaarts", - "Y_Verkeersbelasting kant binnenwaarts", - "Z_Verkeersbelasting kant binnenwaarts", - "X_Verkeersbelasting kant buitenwaarts", - "Y_Verkeersbelasting kant buitenwaarts", - "Z_Verkeersbelasting kant buitenwaarts", - "X_Kruin buitentalud", - "Y_Kruin buitentalud", - "Z_Kruin buitentalud", - "X_Insteek buitenberm", - "Y_Insteek buitenberm", - "Z_Insteek buitenberm", - "X_Kruin buitenberm", - "Y_Kruin buitenberm", - "Z_Kruin buitenberm", - "X_Teen dijk buitenwaarts", - "Y_Teen dijk buitenwaarts", - "Z_Teen dijk buitenwaarts", - "X_Insteek geul", - "Y_Insteek geul", - "Z_Insteek geul", - "X_Teen geul", - "Y_Teen geul", - "Z_Teen geul", - "X_Maaiveld buitenwaarts", - "Y_Maaiveld buitenwaarts", - "Z_Maaiveld buitenwaarts"] - -SCALER_PATH = os.path.join("data", "trained_models", "scaler.pik") -MODEL_PATH = os.path.join('data', 'trained_models', 'dijknet_simple_95.pt') - -INVERSE_CLASS_DICT_FULL = {v: k for k, v in CLASS_DICT_FULL.items()} -INVERSE_CLASS_DICT_SIMPLE = {v: k for k, v in CLASS_DICT_SIMPLE.items()} -INVERSE_CLASS_DICT_SIMPLE_BERM = {v: k for k, v in CLASS_DICT_SIMPLE_BERM.items()} -INVERSE_CLASS_DICT_SIMPLE_SLOOT = {v: k for k, v in CLASS_DICT_SIMPLE_SLOOT.items()} -INVERSE_CLASS_DICT_REGIONAL = {v: k for k, v in CLASS_DICT_REGIONAL.items()} - -# manual mappings to get the correct names for plotting later -if 11 in INVERSE_CLASS_DICT_FULL: - INVERSE_CLASS_DICT_FULL[10] = 'Kruin buitentalud' diff --git a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/data/sampler.py b/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/data/sampler.py deleted file mode 100644 index 62a9a43bd1d4c21fbdcb262db7da8d4fe27b26de..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/data/sampler.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch - - -class Sampler(object): - """Base class for all Samplers. 
- - Every Sampler subclass has to provide an __iter__ method, providing a way - to iterate over indices of dataset elements, and a __len__ method that - returns the length of the returned iterators. - """ - - def __init__(self, data_source): - pass - - def __iter__(self): - raise NotImplementedError - - def __len__(self): - raise NotImplementedError - - -class SequentialSampler(Sampler): - """Samples elements sequentially, always in the same order. - - Arguments: - data_source (Dataset): dataset to sample from - """ - - def __init__(self, data_source): - self.data_source = data_source - - def __iter__(self): - return iter(range(len(self.data_source))) - - def __len__(self): - return len(self.data_source) - - -class RandomSampler(Sampler): - """Samples elements randomly, without replacement. - - Arguments: - data_source (Dataset): dataset to sample from - """ - - def __init__(self, data_source): - self.data_source = data_source - - def __iter__(self): - return iter(torch.randperm(len(self.data_source)).long()) - - def __len__(self): - return len(self.data_source) - - -class SubsetRandomSampler(Sampler): - """Samples elements randomly from a given list of indices, without replacement. - - Arguments: - indices (list): a list of indices - """ - - def __init__(self, indices): - self.indices = indices - - def __iter__(self): - return (self.indices[i] for i in torch.randperm(len(self.indices))) - - def __len__(self): - return len(self.indices) - - -class WeightedRandomSampler(Sampler): - """Samples elements from [0,..,len(weights)-1] with given probabilities (weights). - - Arguments: - weights (list) : a list of weights, not necessary summing up to one - num_samples (int): number of samples to draw - replacement (bool): if ``True``, samples are drawn with replacement. - If not, they are drawn without replacement, which means that when a - sample index is drawn for a row, it cannot be drawn again for that row. - """ - - def __init__(self, weights, num_samples, replacement=True): - self.weights = torch.DoubleTensor(weights) - self.num_samples = num_samples - self.replacement = replacement - - def __iter__(self): - return iter(torch.multinomial(self.weights, self.num_samples, self.replacement)) - - def __len__(self): - return self.num_samples - - -class BatchSampler(object): - """Wraps another sampler to yield a mini-batch of indices. - - Args: - sampler (Sampler): Base sampler. - batch_size (int): Size of mini-batch. 
- drop_last (bool): If ``True``, the sampler will drop the last batch if - its size would be less than ``batch_size`` - - Example: - >>> list(BatchSampler(range(10), batch_size=3, drop_last=False)) - [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]] - >>> list(BatchSampler(range(10), batch_size=3, drop_last=True)) - [[0, 1, 2], [3, 4, 5], [6, 7, 8]] - """ - - def __init__(self, sampler, batch_size, drop_last): - self.sampler = sampler - self.batch_size = batch_size - self.drop_last = drop_last - - def __iter__(self): - batch = [] - for idx in self.sampler: - batch.append(idx) - if len(batch) == self.batch_size: - yield batch - batch = [] - if len(batch) > 0 and not self.drop_last: - yield batch - - def __len__(self): - if self.drop_last: - return len(self.sampler) // self.batch_size - else: - return (len(self.sampler) + self.batch_size - 1) // self.batch_size diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/masks/countless/countless3d.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/masks/countless/countless3d.py deleted file mode 100644 index 810a71e4b1fa344dd2d731186516dbfa96c9cd03..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/masks/countless/countless3d.py +++ /dev/null @@ -1,356 +0,0 @@ -from six.moves import range -from PIL import Image -import numpy as np -import io -import time -import math -import random -import sys -from collections import defaultdict -from copy import deepcopy -from itertools import combinations -from functools import reduce -from tqdm import tqdm - -from memory_profiler import profile - -def countless5(a,b,c,d,e): - """First stage of generalizing from countless2d. - - You have five slots: A, B, C, D, E - - You can decide if something is the winner by first checking for - matches of three, then matches of two, then picking just one if - the other two tries fail. In countless2d, you just check for matches - of two and then pick one of them otherwise. - - Unfortunately, you need to check ABC, ABD, ABE, BCD, BDE, & CDE. - Then you need to check AB, AC, AD, BC, BD - We skip checking E because if none of these match, we pick E. We can - skip checking AE, BE, CE, DE since if any of those match, E is our boy - so it's redundant. - - So countless grows cominatorially in complexity. - """ - sections = [ a,b,c,d,e ] - - p2 = lambda q,r: q * (q == r) # q if p == q else 0 - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) # q if q == r == s else 0 - - lor = lambda x,y: x + (x == 0) * y - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - return reduce(lor, (results3, results2, e)) - -def countless8(a,b,c,d,e,f,g,h): - """Extend countless5 to countless8. 
Same deal, except we also - need to check for matches of length 4.""" - sections = [ a, b, c, d, e, f, g, h ] - - p2 = lambda q,r: q * (q == r) - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) - p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) ) - - lor = lambda x,y: x + (x == 0) * y - - results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) ) - results4 = reduce(lor, results4) - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - # We can always use our shortcut of omitting the last element - # for N choose 2 - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - return reduce(lor, [ results4, results3, results2, h ]) - -def dynamic_countless3d(data): - """countless8 + dynamic programming. ~2x faster""" - sections = [] - - # shift zeros up one so they don't interfere with bitwise operators - # we'll shift down at the end - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - pick = lambda a,b: a * (a == b) - lor = lambda x,y: x + (x == 0) * y - - subproblems2 = {} - - results2 = None - for x,y in combinations(range(7), 2): - res = pick(sections[x], sections[y]) - subproblems2[(x,y)] = res - if results2 is not None: - results2 += (results2 == 0) * res - else: - results2 = res - - subproblems3 = {} - - results3 = None - for x,y,z in combinations(range(8), 3): - res = pick(subproblems2[(x,y)], sections[z]) - - if z != 7: - subproblems3[(x,y,z)] = res - - if results3 is not None: - results3 += (results3 == 0) * res - else: - results3 = res - - results3 = reduce(lor, (results3, results2, sections[-1])) - - # free memory - results2 = None - subproblems2 = None - res = None - - results4 = ( pick(subproblems3[(x,y,z)], sections[w]) for x,y,z,w in combinations(range(8), 4) ) - results4 = reduce(lor, results4) - subproblems3 = None # free memory - - final_result = lor(results4, results3) - 1 - data -= 1 - return final_result - -def countless3d(data): - """Now write countless8 in such a way that it could be used - to process an image.""" - sections = [] - - # shift zeros up one so they don't interfere with bitwise operators - # we'll shift down at the end - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
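# Illustration: for factor (2, 2, 2) this actually produces eight sections, one per offset
# (o0, o1, o2) in {0, 1}^3, e.g. data[0::2, 0::2, 0::2] for the first and
# data[1::2, 1::2, 1::2] for the last; the "2D array" / "four arrays" wording above is a
# leftover from the 2D version.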
- factor = (2,2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - p2 = lambda q,r: q * (q == r) - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) - p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) ) - - lor = lambda x,y: x + (x == 0) * y - - results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) ) - results4 = reduce(lor, results4) - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - final_result = reduce(lor, (results4, results3, results2, sections[-1])) - 1 - data -= 1 - return final_result - -def countless_generalized(data, factor): - assert len(data.shape) == len(factor) - - sections = [] - - mode_of = reduce(lambda x,y: x * y, factor) - majority = int(math.ceil(float(mode_of) / 2)) - - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - def pick(elements): - eq = ( elements[i] == elements[i+1] for i in range(len(elements) - 1) ) - anded = reduce(lambda p,q: p & q, eq) - return elements[0] * anded - - def logical_or(x,y): - return x + (x == 0) * y - - result = ( pick(combo) for combo in combinations(sections, majority) ) - result = reduce(logical_or, result) - for i in range(majority - 1, 3-1, -1): # 3-1 b/c of exclusive bounds - partial_result = ( pick(combo) for combo in combinations(sections, i) ) - partial_result = reduce(logical_or, partial_result) - result = logical_or(result, partial_result) - - partial_result = ( pick(combo) for combo in combinations(sections[:-1], 2) ) - partial_result = reduce(logical_or, partial_result) - result = logical_or(result, partial_result) - - result = logical_or(result, sections[-1]) - 1 - data -= 1 - return result - -def dynamic_countless_generalized(data, factor): - assert len(data.shape) == len(factor) - - sections = [] - - mode_of = reduce(lambda x,y: x * y, factor) - majority = int(math.ceil(float(mode_of) / 2)) - - data += 1 # offset from zero - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
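# Worked example of the threshold computed above: factor (2, 2, 2) gives mode_of = 8
# candidate voxels per output cell, so majority = ceil(8 / 2) = 4, and a value is accepted
# at the highest-precedence stage only when at least 4 of the 8 sections agree on it.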
- for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - pick = lambda a,b: a * (a == b) - lor = lambda x,y: x + (x == 0) * y # logical or - - subproblems = [ {}, {} ] - results2 = None - for x,y in combinations(range(len(sections) - 1), 2): - res = pick(sections[x], sections[y]) - subproblems[0][(x,y)] = res - if results2 is not None: - results2 = lor(results2, res) - else: - results2 = res - - results = [ results2 ] - for r in range(3, majority+1): - r_results = None - for combo in combinations(range(len(sections)), r): - res = pick(subproblems[0][combo[:-1]], sections[combo[-1]]) - - if combo[-1] != len(sections) - 1: - subproblems[1][combo] = res - - if r_results is not None: - r_results = lor(r_results, res) - else: - r_results = res - results.append(r_results) - subproblems[0] = subproblems[1] - subproblems[1] = {} - - results.reverse() - final_result = lor(reduce(lor, results), sections[-1]) - 1 - data -= 1 - return final_result - -def downsample_with_averaging(array): - """ - Downsample x by factor using averaging. - - @return: The downsampled array, of the same type as x. - """ - factor = (2,2,2) - - if np.array_equal(factor[:3], np.array([1,1,1])): - return array - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor)) - temp = np.zeros(output_shape, float) - counts = np.zeros(output_shape, np.int) - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - indexing_expr = tuple(np.s_[:s] for s in part.shape) - temp[indexing_expr] += part - counts[indexing_expr] += 1 - return np.cast[array.dtype](temp / counts) - -def downsample_with_max_pooling(array): - - factor = (2,2,2) - - sections = [] - - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - output = sections[0].copy() - - for section in sections[1:]: - np.maximum(output, section, output) - - return output - -def striding(array): - """Downsample x by factor using striding. - - @return: The downsampled array, of the same type as x. 
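For example, with factor (2, 2, 2) a (4, 4, 4) input keeps only the voxels at even
indices along each axis, giving a (2, 2, 2) output.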
- """ - factor = (2,2,2) - if np.all(np.array(factor, int) == 1): - return array - return array[tuple(np.s_[::f] for f in factor)] - -def benchmark(): - def countless3d_generalized(img): - return countless_generalized(img, (2,8,1)) - def countless3d_dynamic_generalized(img): - return dynamic_countless_generalized(img, (8,8,1)) - - methods = [ - # countless3d, - # dynamic_countless3d, - countless3d_generalized, - # countless3d_dynamic_generalized, - # striding, - # downsample_with_averaging, - # downsample_with_max_pooling - ] - - data = np.zeros(shape=(16**2, 16**2, 16**2), dtype=np.uint8) + 1 - - N = 5 - - print('Algorithm\tMPx\tMB/sec\tSec\tN=%d' % N) - - for fn in methods: - start = time.time() - for _ in range(N): - result = fn(data) - end = time.time() - - total_time = (end - start) - mpx = N * float(data.shape[0] * data.shape[1] * data.shape[2]) / total_time / 1024.0 / 1024.0 - mbytes = mpx * np.dtype(data.dtype).itemsize - # Output in tab separated format to enable copy-paste into excel/numbers - print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time)) - -if __name__ == '__main__': - benchmark() - -# Algorithm MPx MB/sec Sec N=5 -# countless3d 10.564 10.564 60.58 -# dynamic_countless3d 22.717 22.717 28.17 -# countless3d_generalized 9.702 9.702 65.96 -# countless3d_dynamic_generalized 22.720 22.720 28.17 -# striding 253360.506 253360.506 0.00 -# downsample_with_averaging 224.098 224.098 2.86 -# downsample_with_max_pooling 690.474 690.474 0.93 - - - diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py deleted file mode 100644 index 5fd3bb992e51b54225d53edb5f8e50f575997f81..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiofiles/threadpool/utils.py +++ /dev/null @@ -1,72 +0,0 @@ -import functools - - -def delegate_to_executor(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_delegate_method(attr_name)) - return cls - - return cls_builder - - -def proxy_method_directly(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_proxy_method(attr_name)) - return cls - - return cls_builder - - -def proxy_property_directly(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_proxy_property(attr_name)) - return cls - - return cls_builder - - -def cond_delegate_to_executor(*attrs): - def cls_builder(cls): - for attr_name in attrs: - setattr(cls, attr_name, _make_cond_delegate_method(attr_name)) - return cls - - return cls_builder - - -def _make_delegate_method(attr_name): - async def method(self, *args, **kwargs): - cb = functools.partial(getattr(self._file, attr_name), *args, **kwargs) - return await self._loop.run_in_executor(self._executor, cb) - - return method - - -def _make_proxy_method(attr_name): - def method(self, *args, **kwargs): - return getattr(self._file, attr_name)(*args, **kwargs) - - return method - - -def _make_proxy_property(attr_name): - def proxy_property(self): - return getattr(self._file, attr_name) - - return property(proxy_property) - - -def _make_cond_delegate_method(attr_name): - """For spooled temp files, delegate only if rolled to file object""" - - async def method(self, *args, **kwargs): - if self._file._rolled: - cb = functools.partial(getattr(self._file, attr_name), *args, **kwargs) - return await self._loop.run_in_executor(self._executor, cb) - else: - return getattr(self._file, attr_name)(*args, **kwargs) - - return method diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/voltLib/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/voltLib/__init__.py deleted file mode 100644 index 886aa3a7864523656e609dd602683d73f985f467..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/voltLib/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -"""fontTools.voltLib -- a package for dealing with Visual OpenType Layout Tool -(VOLT) files.""" - -# See -# http://www.microsoft.com/typography/VOLT.mspx diff --git a/spaces/jofaichow/shiny-numerati/Dockerfile b/spaces/jofaichow/shiny-numerati/Dockerfile deleted file mode 100644 index b0aeede200aa3168d059d1407b9f707d39e7983b..0000000000000000000000000000000000000000 --- a/spaces/jofaichow/shiny-numerati/Dockerfile +++ /dev/null @@ -1,23 +0,0 @@ -FROM 
rocker/r-ver:4.2.3 - -# basic shiny functionality -RUN R -q -e "install.packages(c('shiny', 'rmarkdown', 'markdown'))" - -# additional shiny functionality -RUN R -q -e "install.packages(c('shinydashboard', 'shinydashboardPlus'))" -RUN R -q -e "install.packages(c('shinyWidgets', 'shinycssloaders'))" - -# other R packages -RUN R -q -e "install.packages(c('DT', 'plotly', 'scico', 'ggthemes', 'scales', 'wesanderson'))" -RUN R -q -e "install.packages(c('data.table', 'dtplyr', 'devtools'))" - -# modified version of Rnumerai -RUN R -q -e "devtools::install_github('woobe/Rnumerai')" - -# copy the app to the image -WORKDIR /shinyapp -COPY --link Rprofile.site /usr/local/lib/R/etc/ -COPY --link app /shinyapp/ - -EXPOSE 7860 -CMD ["R", "-q", "-e", "shiny::runApp('/shinyapp')"] diff --git a/spaces/jone/GFPGAN/gfpgan/data/__init__.py b/spaces/jone/GFPGAN/gfpgan/data/__init__.py deleted file mode 100644 index 69fd9f9026407c4d185f86b122000485b06fd986..0000000000000000000000000000000000000000 --- a/spaces/jone/GFPGAN/gfpgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'gfpgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/jone/Music_Source_Separation/bytesep/dataset_creation/create_indexes/__init__.py b/spaces/jone/Music_Source_Separation/bytesep/dataset_creation/create_indexes/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jordonpeter01/ai-comic-factory/src/app/layouts/index.tsx b/spaces/jordonpeter01/ai-comic-factory/src/app/layouts/index.tsx deleted file mode 100644 index 4553783fbfdd3636bf311ddc8661ea13e585b61a..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/app/layouts/index.tsx +++ /dev/null @@ -1,287 +0,0 @@ -"use client" - -import { Panel } from "@/app/interface/panel" -import { pick } from "@/lib/pick" -import { Grid } from "@/app/interface/grid" - -export function Layout0() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout1() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout2_todo() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout3_todo() { - return ( - -
    - -
    -
    - -
    -
    -
    - -
    -
    - -
    -
    -
    - ) -} - -export function Layout4_todo() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - - -export function Layout2() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -export function Layout3() { - return ( - -
    - -
    -
    - -
    -
    - -
    -
    - -
    -
    - ) -} - -// export const layouts = { Layout1, Layout2_todo, Layout3_todo, Layout4_todo, Layout2, Layout3 } -export const allLayouts = { - random: <>, - Layout0, - Layout1, - Layout2, - Layout3 -} - -export const allLayoutLabels = { - random: "Random layout", - Layout0: "Layout 0", - Layout1: "Layout 1", - Layout2: "Layout 2", - Layout3: "Layout 3", -} - -export type LayoutName = keyof typeof allLayouts - -export const defaultLayout: LayoutName = "Layout1" - -export type LayoutCategory = "square" | "fluid" - -export const nonRandomLayouts = Object.keys(allLayouts).filter(layout => layout !== "random") - -export const getRandomLayoutName = (): LayoutName => { - return pick(nonRandomLayouts) as LayoutName -} - -export function getRandomLayoutNames(): LayoutName[] { - return nonRandomLayouts.sort(() => Math.random() - 0.5) as LayoutName[] -} - diff --git a/spaces/jpwahle/field-time-diversity/metrics.py b/spaces/jpwahle/field-time-diversity/metrics.py deleted file mode 100644 index f12398092baf18a0e632c58670ec3ccee34fbd67..0000000000000000000000000000000000000000 --- a/spaces/jpwahle/field-time-diversity/metrics.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright 2023 by Jan Philip Wahle, https://jpwahle.com/ -# All rights reserved. - -import numpy as np - - -def calculate_gini_simpson(dictionary): - """ - Function to Calculate Gini Simpson's Diversity Index - """ - total = sum(dictionary.values()) - sum_squares = sum((n / total) ** 2 for n in dictionary.values()) - return 1 - sum_squares - - -def calculate_gini(frequencies): - """ - Function to Calculate Gini's Diversity Index - """ - frequencies = np.array(frequencies) - if len(frequencies) == 0 or np.mean(frequencies) == 0: - return None - total = sum( - np.sum(np.abs(xi - frequencies[i:])) - for i, xi in enumerate(frequencies[:-1], 1) - ) - return total / (len(frequencies) ** 2 * np.mean(frequencies)) diff --git a/spaces/justest/gpt4free/g4f/.v1/gpt4free/gptworldAi/__init__.py b/spaces/justest/gpt4free/g4f/.v1/gpt4free/gptworldAi/__init__.py deleted file mode 100644 index e7f76c61209fabf224698949764155ac53cc7a6b..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/.v1/gpt4free/gptworldAi/__init__.py +++ /dev/null @@ -1,105 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/23 13:37 -@Auth : Hp_mzx -@File :__init__.py.py -@IDE :PyCharm -""" -import json -import uuid -import random -import binascii -import requests -import Crypto.Cipher.AES as AES -from fake_useragent import UserAgent - -class ChatCompletion: - @staticmethod - def create(messages:[],proxy: str = None): - url = "https://chat.getgpt.world/api/chat/stream" - headers = { - "Content-Type": "application/json", - "Referer": "https://chat.getgpt.world/", - 'user-agent': UserAgent().random, - } - proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else None - data = json.dumps({ - "messages": messages, - "frequency_penalty": 0, - "max_tokens": 4000, - "model": "gpt-3.5-turbo", - "presence_penalty": 0, - "temperature": 1, - "top_p": 1, - "stream": True, - "uuid": str(uuid.uuid4()) - }) - signature = ChatCompletion.encrypt(data) - res = requests.post(url, headers=headers, data=json.dumps({"signature": signature}), proxies=proxies,stream=True) - for chunk in res.iter_content(chunk_size=None): - res.raise_for_status() - datas = chunk.decode('utf-8').split('data: ') - for data in datas: - if not data or "[DONE]" in data: - continue - data_json = json.loads(data) - content = data_json['choices'][0]['delta'].get('content') - if content: - 
yield content - - - @staticmethod - def random_token(e): - token = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" - n = len(token) - return "".join([token[random.randint(0, n - 1)] for i in range(e)]) - - @staticmethod - def encrypt(e): - t = ChatCompletion.random_token(16).encode('utf-8') - n = ChatCompletion.random_token(16).encode('utf-8') - r = e.encode('utf-8') - cipher = AES.new(t, AES.MODE_CBC, n) - ciphertext = cipher.encrypt(ChatCompletion.__pad_data(r)) - return binascii.hexlify(ciphertext).decode('utf-8') + t.decode('utf-8') + n.decode('utf-8') - - @staticmethod - def __pad_data(data: bytes) -> bytes: - block_size = AES.block_size - padding_size = block_size - len(data) % block_size - padding = bytes([padding_size] * padding_size) - return data + padding - - -class Completion: - @staticmethod - def create(prompt:str,proxy:str=None): - return ChatCompletion.create([ - { - "content": "You are ChatGPT, a large language model trained by OpenAI.\nCarefully heed the user's instructions. \nRespond using Markdown.", - "role": "system" - }, - {"role": "user", "content": prompt} - ], proxy) - - -if __name__ == '__main__': - # single completion - text = "" - for chunk in Completion.create("你是谁", "127.0.0.1:7890"): - text = text + chunk - print(chunk, end="", flush=True) - print() - - - #chat completion - message = [] - while True: - prompt = input("请输入问题:") - message.append({"role": "user","content": prompt}) - text = "" - for chunk in ChatCompletion.create(message,'127.0.0.1:7890'): - text = text+chunk - print(chunk, end="", flush=True) - print() - message.append({"role": "assistant", "content": text}) \ No newline at end of file diff --git a/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/kadirnar/BioGpt/utils.py b/spaces/kadirnar/BioGpt/utils.py deleted file mode 100644 index aad209806d5459ea9dbd45b148e988061696350e..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/BioGpt/utils.py +++ /dev/null @@ -1,106 +0,0 @@ -from bs4 import BeautifulSoup -import requests - - -lang_ids = { - "Afrikaans": "af", - "Amharic": "am", - "Arabic": "ar", - "Asturian": "ast", - "Azerbaijani": "az", - "Bashkir": "ba", - "Belarusian": "be", - "Bulgarian": "bg", - "Bengali": "bn", - "Breton": "br", - "Bosnian": "bs", - "Catalan": "ca", - "Cebuano": "ceb", - "Czech": "cs", - "Welsh": "cy", - "Danish": "da", - "German": "de", - "Greeek": "el", - "English": "en", - "Spanish": "es", - "Estonian": "et", - "Persian": "fa", - "Fulah": "ff", - "Finnish": "fi", - "French": "fr", - "Western Frisian": "fy", - "Irish": "ga", - "Gaelic": "gd", - "Galician": "gl", - "Gujarati": "gu", - "Hausa": "ha", - "Hebrew": "he", - "Hindi": "hi", - "Croatian": "hr", - "Haitian": "ht", - "Hungarian": "hu", - "Armenian": "hy", - "Indonesian": "id", - "Igbo": "ig", - "Iloko": "ilo", - "Icelandic": "is", - "Italian": "it", - "Japanese": "ja", - "Javanese": "jv", - "Georgian": "ka", - "Kazakh": "kk", - "Central Khmer": "km", - "Kannada": "kn", - "Korean": "ko", - "Luxembourgish": "lb", - "Ganda": "lg", - "Lingala": "ln", - "Lao": "lo", - "Lithuanian": "lt", - "Latvian": "lv", - "Malagasy": "mg", - "Macedonian": "mk", - "Malayalam": 
"ml", - "Mongolian": "mn", - "Marathi": "mr", - "Malay": "ms", - "Burmese": "my", - "Nepali": "ne", - "Dutch": "nl", - "Norwegian": "no", - "Northern Sotho": "ns", - "Occitan": "oc", - "Oriya": "or", - "Panjabi": "pa", - "Polish": "pl", - "Pushto": "ps", - "Portuguese": "pt", - "Romanian": "ro", - "Russian": "ru", - "Sindhi": "sd", - "Sinhala": "si", - "Slovak": "sk", - "Slovenian": "sl", - "Somali": "so", - "Albanian": "sq", - "Serbian": "sr", - "Swati": "ss", - "Sundanese": "su", - "Swedish": "sv", - "Swahili": "sw", - "Tamil": "ta", - "Thai": "th", - "Tagalog": "tl", - "Tswana": "tn", - "Turkish": "tr", - "Ukrainian": "uk", - "Urdu": "ur", - "Uzbek": "uz", - "Vietnamese": "vi", - "Wolof": "wo", - "Xhosa": "xh", - "Yiddish": "yi", - "Yoruba": "yo", - "Chinese": "zh", - "Zulu": "zu", -} diff --git a/spaces/karynaur/mnist-cloned/README.md b/spaces/karynaur/mnist-cloned/README.md deleted file mode 100644 index 426b486c5bb4b1d93e22c3f8c3b32c357b1a84eb..0000000000000000000000000000000000000000 --- a/spaces/karynaur/mnist-cloned/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Mnist -emoji: 👁 -colorFrom: green -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: ayaanzaveri/mnist ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/keras-io/conv-lstm/README.md b/spaces/keras-io/conv-lstm/README.md deleted file mode 100644 index 4e02b85b23aa348e3a6a25fe8b19ca29687d136e..0000000000000000000000000000000000000000 --- a/spaces/keras-io/conv-lstm/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Next-Frame Video Prediction -emoji: 📽️ -colorFrom: yellow -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at diff --git a/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/generation.py b/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/generation.py deleted file mode 100644 index ad474d770235c7b665218e64699fb0b0b1b8cc3f..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-with-Voice-Cloning/bark/generation.py +++ /dev/null @@ -1,864 +0,0 @@ -import contextlib -import gc -import os -import re -import requests -import gc -import sys - -from encodec import EncodecModel -import funcy -import logging -import numpy as np -from scipy.special import softmax -import torch -import torch.nn.functional as F -import tqdm -from transformers import BertTokenizer -from huggingface_hub import hf_hub_download, hf_hub_url - -from .model import GPTConfig, GPT -from .model_fine import FineGPT, FineGPTConfig -from .settings import initenv - -initenv(sys.argv) -global_force_cpu = os.environ.get("BARK_FORCE_CPU", False) -if ( - global_force_cpu != True and - torch.cuda.is_available() and - hasattr(torch.cuda, "amp") and - hasattr(torch.cuda.amp, "autocast") and - hasattr(torch.cuda, "is_bf16_supported") and - torch.cuda.is_bf16_supported() -): - autocast = funcy.partial(torch.cuda.amp.autocast, dtype=torch.bfloat16) -else: - @contextlib.contextmanager - def autocast(): - yield - - -# hold models in global scope to lazy load -global models -models = {} - -global models_devices -models_devices = {} - - -CONTEXT_WINDOW_SIZE = 1024 - -SEMANTIC_RATE_HZ = 49.9 -SEMANTIC_VOCAB_SIZE = 10_000 - -CODEBOOK_SIZE = 1024 -N_COARSE_CODEBOOKS = 2 -N_FINE_CODEBOOKS = 8 -COARSE_RATE_HZ = 75 - -SAMPLE_RATE = 24_000 - - -SUPPORTED_LANGS = [ - ("English", "en"), - ("German", "de"), - ("Spanish", "es"), - ("French", "fr"), - ("Hindi", "hi"), - ("Italian", "it"), - ("Japanese", "ja"), - ("Korean", "ko"), - ("Polish", "pl"), - ("Portuguese", "pt"), - ("Russian", "ru"), - ("Turkish", "tr"), - ("Chinese", "zh"), -] - -ALLOWED_PROMPTS = {"announcer"} -for _, lang in SUPPORTED_LANGS: - for prefix in ("", f"v2{os.path.sep}"): - for n in range(10): - ALLOWED_PROMPTS.add(f"{prefix}{lang}_speaker_{n}") - - -logger = logging.getLogger(__name__) - - -CUR_PATH = os.path.dirname(os.path.abspath(__file__)) - - -#default_cache_dir = os.path.join(os.path.expanduser("~"), ".cache") -#CACHE_DIR = os.path.join(os.getenv("XDG_CACHE_HOME", default_cache_dir), "suno", "bark_v0") -#CACHE_DIR = os.path.join(os.getcwd(), "models" -CACHE_DIR = "./models" - - -def _cast_bool_env_var(s): - return s.lower() in ('true', '1', 't') - -USE_SMALL_MODELS = _cast_bool_env_var(os.environ.get("SUNO_USE_SMALL_MODELS", "False")) -GLOBAL_ENABLE_MPS = _cast_bool_env_var(os.environ.get("SUNO_ENABLE_MPS", "False")) -OFFLOAD_CPU = _cast_bool_env_var(os.environ.get("SUNO_OFFLOAD_CPU", "False")) - -REMOTE_MODEL_PATHS = { - "text_small": { - "repo_id": "suno/bark", - "file_name": "text.pt", - }, - "coarse_small": { - "repo_id": "suno/bark", - "file_name": "coarse.pt", - }, - "fine_small": { - "repo_id": "suno/bark", - "file_name": "fine.pt", - }, - "text": { - "repo_id": "suno/bark", - "file_name": "text_2.pt", - }, - 
"coarse": { - "repo_id": "suno/bark", - "file_name": "coarse_2.pt", - }, - "fine": { - "repo_id": "suno/bark", - "file_name": "fine_2.pt", - }, -} - - -if not hasattr(torch.nn.functional, 'scaled_dot_product_attention') and torch.cuda.is_available(): - logger.warning( - "torch version does not support flash attention. You will get faster" + - " inference speed by upgrade torch to newest nightly version." - ) - - -def grab_best_device(use_gpu=True): - if torch.cuda.device_count() > 0 and use_gpu: - device = "cuda" - elif torch.backends.mps.is_available() and use_gpu and GLOBAL_ENABLE_MPS: - device = "mps" - else: - device = "cpu" - return device - - -def _get_ckpt_path(model_type, use_small=False): - key = model_type - if use_small or USE_SMALL_MODELS: - key += "_small" - return os.path.join(CACHE_DIR, REMOTE_MODEL_PATHS[key]["file_name"]) - -""" -def _download(from_hf_path, file_name, destfilename): - os.makedirs(CACHE_DIR, exist_ok=True) - hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR, local_dir_use_symlinks=False) - # Bug in original repo? Downloaded name differs from expected... - if not os.path.exists(destfilename): - localname = os.path.join(CACHE_DIR, file_name) - os.rename(localname, destfilename) -""" -def _download(from_hf_path, file_name): - os.makedirs(CACHE_DIR, exist_ok=True) - hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR) - - -class InferenceContext: - def __init__(self, benchmark=False): - # we can't expect inputs to be the same length, so disable benchmarking by default - self._chosen_cudnn_benchmark = benchmark - self._cudnn_benchmark = None - - def __enter__(self): - self._cudnn_benchmark = torch.backends.cudnn.benchmark - torch.backends.cudnn.benchmark = self._chosen_cudnn_benchmark - - def __exit__(self, exc_type, exc_value, exc_traceback): - torch.backends.cudnn.benchmark = self._cudnn_benchmark - - -if torch.cuda.is_available(): - torch.backends.cuda.matmul.allow_tf32 = True - torch.backends.cudnn.allow_tf32 = True - - -@contextlib.contextmanager -def _inference_mode(): - with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast(): - yield - - -def _clear_cuda_cache(): - if torch.cuda.is_available(): - torch.cuda.empty_cache() - torch.cuda.synchronize() - - -def clean_models(model_key=None): - global models - model_keys = [model_key] if model_key is not None else models.keys() - for k in model_keys: - if k in models: - del models[k] - _clear_cuda_cache() - gc.collect() - - -def _load_model(ckpt_path, device, use_small=False, model_type="text"): - if model_type == "text": - ConfigClass = GPTConfig - ModelClass = GPT - elif model_type == "coarse": - ConfigClass = GPTConfig - ModelClass = GPT - elif model_type == "fine": - ConfigClass = FineGPTConfig - ModelClass = FineGPT - else: - raise NotImplementedError() - - # Force-remove Models to allow running on >12Gb GPU - # CF: Probably not needed anymore - #global models - #models.clear() - #gc.collect() - #torch.cuda.empty_cache() - # to here... 
- - model_key = f"{model_type}_small" if use_small or USE_SMALL_MODELS else model_type - model_info = REMOTE_MODEL_PATHS[model_key] - if not os.path.exists(ckpt_path): - logger.info(f"{model_type} model not found, downloading into `{CACHE_DIR}`.") - ## added next two lines to make it super clear which model is being downloaded - remote_filename = hf_hub_url(model_info["repo_id"], model_info["file_name"]) - print(f"Downloading {model_key} {model_info['repo_id']} remote model file {remote_filename} {model_info['file_name']} to {CACHE_DIR}") - _download(model_info["repo_id"], model_info["file_name"]) - # add next line to make it super clear which model is being loaded - print(f"Loading {model_key} model from {ckpt_path} to {device}") # added - checkpoint = torch.load(ckpt_path, map_location=device) - # this is a hack - model_args = checkpoint["model_args"] - if "input_vocab_size" not in model_args: - model_args["input_vocab_size"] = model_args["vocab_size"] - model_args["output_vocab_size"] = model_args["vocab_size"] - del model_args["vocab_size"] - gptconf = ConfigClass(**checkpoint["model_args"]) - model = ModelClass(gptconf) - state_dict = checkpoint["model"] - # fixup checkpoint - unwanted_prefix = "_orig_mod." - for k, v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix) :]] = state_dict.pop(k) - extra_keys = set(state_dict.keys()) - set(model.state_dict().keys()) - extra_keys = set([k for k in extra_keys if not k.endswith(".attn.bias")]) - missing_keys = set(model.state_dict().keys()) - set(state_dict.keys()) - missing_keys = set([k for k in missing_keys if not k.endswith(".attn.bias")]) - if len(extra_keys) != 0: - raise ValueError(f"extra keys found: {extra_keys}") - if len(missing_keys) != 0: - raise ValueError(f"missing keys: {missing_keys}") - model.load_state_dict(state_dict, strict=False) - n_params = model.get_num_params() - val_loss = checkpoint["best_val_loss"].item() - logger.info(f"model loaded: {round(n_params/1e6,1)}M params, {round(val_loss,3)} loss") - model.eval() - model.to(device) - del checkpoint, state_dict - _clear_cuda_cache() - if model_type == "text": - tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased") - return { - "model": model, - "tokenizer": tokenizer, - } - return model - - -def _load_codec_model(device): - model = EncodecModel.encodec_model_24khz() - model.set_target_bandwidth(6.0) - model.eval() - model.to(device) - _clear_cuda_cache() - return model - - -def load_model(use_gpu=True, use_small=False, force_reload=False, model_type="text"): - _load_model_f = funcy.partial(_load_model, model_type=model_type, use_small=use_small) - if model_type not in ("text", "coarse", "fine"): - raise NotImplementedError() - global models - global models_devices - device = grab_best_device(use_gpu=use_gpu) - model_key = f"{model_type}" - if OFFLOAD_CPU: - models_devices[model_key] = device - device = "cpu" - if model_key not in models or force_reload: - ckpt_path = _get_ckpt_path(model_type, use_small=use_small) - clean_models(model_key=model_key) - model = _load_model_f(ckpt_path, device) - models[model_key] = model - if model_type == "text": - models[model_key]["model"].to(device) - else: - models[model_key].to(device) - return models[model_key] - - -def load_codec_model(use_gpu=True, force_reload=False): - global models - global models_devices - device = grab_best_device(use_gpu=use_gpu) - if device == "mps": - # encodec doesn't support mps - device = "cpu" - model_key = "codec" - if OFFLOAD_CPU: 
- models_devices[model_key] = device - device = "cpu" - if model_key not in models or force_reload: - clean_models(model_key=model_key) - model = _load_codec_model(device) - models[model_key] = model - models[model_key].to(device) - return models[model_key] - - -def preload_models( - text_use_gpu=True, - text_use_small=False, - coarse_use_gpu=True, - coarse_use_small=False, - fine_use_gpu=True, - fine_use_small=False, - codec_use_gpu=True, - force_reload=False -): - """Load all the necessary models for the pipeline.""" - if grab_best_device() == "cpu" and ( - text_use_gpu or coarse_use_gpu or fine_use_gpu or codec_use_gpu - ): - logger.warning("No GPU being used. Careful, inference might be very slow!") - _ = load_model( - model_type="text", use_gpu=text_use_gpu, use_small=text_use_small, force_reload=force_reload - ) - _ = load_model( - model_type="coarse", - use_gpu=coarse_use_gpu, - use_small=coarse_use_small, - force_reload=force_reload, - ) - _ = load_model( - model_type="fine", use_gpu=fine_use_gpu, use_small=fine_use_small, force_reload=force_reload - ) - _ = load_codec_model(use_gpu=codec_use_gpu, force_reload=force_reload) - - -#### -# Generation Functionality -#### - - -def _tokenize(tokenizer, text): - return tokenizer.encode(text, add_special_tokens=False) - - -def _detokenize(tokenizer, enc_text): - return tokenizer.decode(enc_text) - - -def _normalize_whitespace(text): - return re.sub(r"\s+", " ", text).strip() - - -TEXT_ENCODING_OFFSET = 10_048 -SEMANTIC_PAD_TOKEN = 10_000 -TEXT_PAD_TOKEN = 129_595 -SEMANTIC_INFER_TOKEN = 129_599 - - -def _load_history_prompt(history_prompt_input): - if isinstance(history_prompt_input, str) and history_prompt_input.endswith(".npz"): - history_prompt = np.load(history_prompt_input) - elif isinstance(history_prompt_input, str): - # make sure this works on non-ubuntu - history_prompt_input = os.path.join(*history_prompt_input.split("/")) -# if history_prompt_input not in ALLOWED_PROMPTS: -# raise ValueError("history prompt not found") - history_prompt = np.load( - os.path.join(CUR_PATH, "assets", "prompts", f"{history_prompt_input}.npz") - ) - elif isinstance(history_prompt_input, dict): - assert("semantic_prompt" in history_prompt_input) - assert("coarse_prompt" in history_prompt_input) - assert("fine_prompt" in history_prompt_input) - history_prompt = history_prompt_input - else: - raise ValueError("history prompt format unrecognized") - return history_prompt - - -def generate_text_semantic( - text, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - min_eos_p=0.2, - max_gen_duration_s=None, - allow_early_stop=True, - use_kv_caching=False, -): - """Generate semantic tokens from text.""" - assert isinstance(text, str) - text = _normalize_whitespace(text) - assert len(text.strip()) > 0 - if history_prompt is not None: - history_prompt = _load_history_prompt(history_prompt) - semantic_history = history_prompt["semantic_prompt"] - assert ( - isinstance(semantic_history, np.ndarray) - and len(semantic_history.shape) == 1 - and len(semantic_history) > 0 - and semantic_history.min() >= 0 - and semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1 - ) - else: - semantic_history = None - # load models if not yet exist - global models - global models_devices - if "text" not in models: - preload_models() - model_container = models["text"] - model = model_container["model"] - tokenizer = model_container["tokenizer"] - encoded_text = np.array(_tokenize(tokenizer, text)) + TEXT_ENCODING_OFFSET - if OFFLOAD_CPU: - 
model.to(models_devices["text"]) - device = next(model.parameters()).device - if len(encoded_text) > 256: - p = round((len(encoded_text) - 256) / len(encoded_text) * 100, 1) - logger.warning(f"warning, text too long, lopping of last {p}%") - encoded_text = encoded_text[:256] - encoded_text = np.pad( - encoded_text, - (0, 256 - len(encoded_text)), - constant_values=TEXT_PAD_TOKEN, - mode="constant", - ) - if semantic_history is not None: - semantic_history = semantic_history.astype(np.int64) - # lop off if history is too long, pad if needed - semantic_history = semantic_history[-256:] - semantic_history = np.pad( - semantic_history, - (0, 256 - len(semantic_history)), - constant_values=SEMANTIC_PAD_TOKEN, - mode="constant", - ) - else: - semantic_history = np.array([SEMANTIC_PAD_TOKEN] * 256) - x = torch.from_numpy( - np.hstack([ - encoded_text, semantic_history, np.array([SEMANTIC_INFER_TOKEN]) - ]).astype(np.int64) - )[None] - assert x.shape[1] == 256 + 256 + 1 - with _inference_mode(): - x = x.to(device) - n_tot_steps = 768 - # custom tqdm updates since we don't know when eos will occur - pbar = tqdm.tqdm(disable=silent, total=100) - pbar_state = 0 - tot_generated_duration_s = 0 - kv_cache = None - for n in range(n_tot_steps): - if use_kv_caching and kv_cache is not None: - x_input = x[:, [-1]] - else: - x_input = x - logits, kv_cache = model( - x_input, merge_context=True, use_cache=use_kv_caching, past_kv=kv_cache - ) - relevant_logits = logits[0, 0, :SEMANTIC_VOCAB_SIZE] - if allow_early_stop: - relevant_logits = torch.hstack( - (relevant_logits, logits[0, 0, [SEMANTIC_PAD_TOKEN]]) # eos - ) - if top_p is not None: - # faster to convert to numpy - original_device = relevant_logits.device - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(original_device) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = F.softmax(relevant_logits / temp, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - item_next = torch.multinomial(probs, num_samples=1) - probs = probs.to(inf_device) - item_next = item_next.to(inf_device) - if allow_early_stop and ( - item_next == SEMANTIC_VOCAB_SIZE - or (min_eos_p is not None and probs[-1] >= min_eos_p) - ): - # eos found, so break - pbar.update(100 - pbar_state) - break - x = torch.cat((x, item_next[None]), dim=1) - tot_generated_duration_s += 1 / SEMANTIC_RATE_HZ - if max_gen_duration_s is not None and tot_generated_duration_s > max_gen_duration_s: - pbar.update(100 - pbar_state) - break - if n == n_tot_steps - 1: - pbar.update(100 - pbar_state) - break - del logits, relevant_logits, probs, item_next - req_pbar_state = np.min([100, int(round(100 * n / n_tot_steps))]) - if req_pbar_state > pbar_state: - pbar.update(req_pbar_state - pbar_state) - pbar_state = req_pbar_state - pbar.close() - out = x.detach().cpu().numpy().squeeze()[256 + 256 + 1 
:] - if OFFLOAD_CPU: - model.to("cpu") - assert all(0 <= out) and all(out < SEMANTIC_VOCAB_SIZE) - _clear_cuda_cache() - return out - - -def _flatten_codebooks(arr, offset_size=CODEBOOK_SIZE): - assert len(arr.shape) == 2 - arr = arr.copy() - if offset_size is not None: - for n in range(1, arr.shape[0]): - arr[n, :] += offset_size * n - flat_arr = arr.ravel("F") - return flat_arr - - -COARSE_SEMANTIC_PAD_TOKEN = 12_048 -COARSE_INFER_TOKEN = 12_050 - - -def generate_coarse( - x_semantic, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - max_coarse_history=630, # min 60 (faster), max 630 (more context) - sliding_window_len=60, - use_kv_caching=False, -): - """Generate coarse audio codes from semantic tokens.""" -# CF: Uncommented because it breaks swap voice more than once -# assert ( -# isinstance(x_semantic, np.ndarray) -# and len(x_semantic.shape) == 1 -# and len(x_semantic) > 0 -# and x_semantic.min() >= 0 -# and x_semantic.max() <= SEMANTIC_VOCAB_SIZE - 1 -# ) - assert 60 <= max_coarse_history <= 630 - assert max_coarse_history + sliding_window_len <= 1024 - 256 - semantic_to_coarse_ratio = COARSE_RATE_HZ / SEMANTIC_RATE_HZ * N_COARSE_CODEBOOKS - max_semantic_history = int(np.floor(max_coarse_history / semantic_to_coarse_ratio)) - if history_prompt is not None: - history_prompt = _load_history_prompt(history_prompt) - x_semantic_history = history_prompt["semantic_prompt"] - x_coarse_history = history_prompt["coarse_prompt"] - assert ( - isinstance(x_semantic_history, np.ndarray) - and len(x_semantic_history.shape) == 1 - and len(x_semantic_history) > 0 - and x_semantic_history.min() >= 0 - and x_semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1 - and isinstance(x_coarse_history, np.ndarray) - and len(x_coarse_history.shape) == 2 - and x_coarse_history.shape[0] == N_COARSE_CODEBOOKS - and x_coarse_history.shape[-1] >= 0 - and x_coarse_history.min() >= 0 - and x_coarse_history.max() <= CODEBOOK_SIZE - 1 - #and ( - # round(x_coarse_history.shape[-1] / len(x_semantic_history), 1) - # == round(semantic_to_coarse_ratio / N_COARSE_CODEBOOKS, 1) - #) - ) - x_coarse_history = _flatten_codebooks(x_coarse_history) + SEMANTIC_VOCAB_SIZE - # trim histories correctly - n_semantic_hist_provided = np.min( - [ - max_semantic_history, - len(x_semantic_history) - len(x_semantic_history) % 2, - int(np.floor(len(x_coarse_history) / semantic_to_coarse_ratio)), - ] - ) - n_coarse_hist_provided = int(round(n_semantic_hist_provided * semantic_to_coarse_ratio)) - x_semantic_history = x_semantic_history[-n_semantic_hist_provided:].astype(np.int32) - x_coarse_history = x_coarse_history[-n_coarse_hist_provided:].astype(np.int32) - # TODO: bit of a hack for time alignment (sounds better) - x_coarse_history = x_coarse_history[:-2] - else: - x_semantic_history = np.array([], dtype=np.int32) - x_coarse_history = np.array([], dtype=np.int32) - # load models if not yet exist - global models - global models_devices - if "coarse" not in models: - preload_models() - model = models["coarse"] - if OFFLOAD_CPU: - model.to(models_devices["coarse"]) - device = next(model.parameters()).device - # start loop - n_steps = int( - round( - np.floor(len(x_semantic) * semantic_to_coarse_ratio / N_COARSE_CODEBOOKS) - * N_COARSE_CODEBOOKS - ) - ) - assert n_steps > 0 and n_steps % N_COARSE_CODEBOOKS == 0 - x_semantic = np.hstack([x_semantic_history, x_semantic]).astype(np.int32) - x_coarse = x_coarse_history.astype(np.int32) - base_semantic_idx = len(x_semantic_history) - with _inference_mode(): - 
x_semantic_in = torch.from_numpy(x_semantic)[None].to(device) - x_coarse_in = torch.from_numpy(x_coarse)[None].to(device) - n_window_steps = int(np.ceil(n_steps / sliding_window_len)) - n_step = 0 - for _ in tqdm.tqdm(range(n_window_steps), total=n_window_steps, disable=silent): - semantic_idx = base_semantic_idx + int(round(n_step / semantic_to_coarse_ratio)) - # pad from right side - x_in = x_semantic_in[:, np.max([0, semantic_idx - max_semantic_history]) :] - x_in = x_in[:, :256] - x_in = F.pad( - x_in, - (0, 256 - x_in.shape[-1]), - "constant", - COARSE_SEMANTIC_PAD_TOKEN, - ) - x_in = torch.hstack( - [ - x_in, - torch.tensor([COARSE_INFER_TOKEN])[None].to(device), - x_coarse_in[:, -max_coarse_history:], - ] - ) - kv_cache = None - for _ in range(sliding_window_len): - if n_step >= n_steps: - continue - is_major_step = n_step % N_COARSE_CODEBOOKS == 0 - - if use_kv_caching and kv_cache is not None: - x_input = x_in[:, [-1]] - else: - x_input = x_in - - logits, kv_cache = model(x_input, use_cache=use_kv_caching, past_kv=kv_cache) - logit_start_idx = ( - SEMANTIC_VOCAB_SIZE + (1 - int(is_major_step)) * CODEBOOK_SIZE - ) - logit_end_idx = ( - SEMANTIC_VOCAB_SIZE + (2 - int(is_major_step)) * CODEBOOK_SIZE - ) - relevant_logits = logits[0, 0, logit_start_idx:logit_end_idx] - if top_p is not None: - # faster to convert to numpy - original_device = relevant_logits.device - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(original_device) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = F.softmax(relevant_logits / temp, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - item_next = torch.multinomial(probs, num_samples=1) - probs = probs.to(inf_device) - item_next = item_next.to(inf_device) - item_next += logit_start_idx - x_coarse_in = torch.cat((x_coarse_in, item_next[None]), dim=1) - x_in = torch.cat((x_in, item_next[None]), dim=1) - del logits, relevant_logits, probs, item_next - n_step += 1 - del x_in - del x_semantic_in - if OFFLOAD_CPU: - model.to("cpu") - gen_coarse_arr = x_coarse_in.detach().cpu().numpy().squeeze()[len(x_coarse_history) :] - del x_coarse_in - assert len(gen_coarse_arr) == n_steps - gen_coarse_audio_arr = gen_coarse_arr.reshape(-1, N_COARSE_CODEBOOKS).T - SEMANTIC_VOCAB_SIZE - for n in range(1, N_COARSE_CODEBOOKS): - gen_coarse_audio_arr[n, :] -= n * CODEBOOK_SIZE - _clear_cuda_cache() - return gen_coarse_audio_arr - - -def generate_fine( - x_coarse_gen, - history_prompt=None, - temp=0.5, - silent=True, -): - """Generate full audio codes from coarse audio codes.""" - assert ( - isinstance(x_coarse_gen, np.ndarray) - and len(x_coarse_gen.shape) == 2 - and 1 <= x_coarse_gen.shape[0] <= N_FINE_CODEBOOKS - 1 - and x_coarse_gen.shape[1] > 0 - and x_coarse_gen.min() >= 0 - and x_coarse_gen.max() <= CODEBOOK_SIZE - 1 - ) - if history_prompt is not None: - 
history_prompt = _load_history_prompt(history_prompt) - x_fine_history = history_prompt["fine_prompt"] - assert ( - isinstance(x_fine_history, np.ndarray) - and len(x_fine_history.shape) == 2 - and x_fine_history.shape[0] == N_FINE_CODEBOOKS - and x_fine_history.shape[1] >= 0 - and x_fine_history.min() >= 0 - and x_fine_history.max() <= CODEBOOK_SIZE - 1 - ) - else: - x_fine_history = None - n_coarse = x_coarse_gen.shape[0] - # load models if not yet exist - global models - global models_devices - if "fine" not in models: - preload_models() - model = models["fine"] - if OFFLOAD_CPU: - model.to(models_devices["fine"]) - device = next(model.parameters()).device - # make input arr - in_arr = np.vstack( - [ - x_coarse_gen, - np.zeros((N_FINE_CODEBOOKS - n_coarse, x_coarse_gen.shape[1])) - + CODEBOOK_SIZE, # padding - ] - ).astype(np.int32) - # prepend history if available (max 512) - if x_fine_history is not None: - x_fine_history = x_fine_history.astype(np.int32) - in_arr = np.hstack( - [ - x_fine_history[:, -512:].astype(np.int32), - in_arr, - ] - ) - n_history = x_fine_history[:, -512:].shape[1] - else: - n_history = 0 - n_remove_from_end = 0 - # need to pad if too short (since non-causal model) - if in_arr.shape[1] < 1024: - n_remove_from_end = 1024 - in_arr.shape[1] - in_arr = np.hstack( - [ - in_arr, - np.zeros((N_FINE_CODEBOOKS, n_remove_from_end), dtype=np.int32) + CODEBOOK_SIZE, - ] - ) - # we can be lazy about fractional loop and just keep overwriting codebooks - n_loops = np.max([0, int(np.ceil((x_coarse_gen.shape[1] - (1024 - n_history)) / 512))]) + 1 - with _inference_mode(): - in_arr = torch.tensor(in_arr.T).to(device) - for n in tqdm.tqdm(range(n_loops), disable=silent): - start_idx = np.min([n * 512, in_arr.shape[0] - 1024]) - start_fill_idx = np.min([n_history + n * 512, in_arr.shape[0] - 512]) - rel_start_fill_idx = start_fill_idx - start_idx - in_buffer = in_arr[start_idx : start_idx + 1024, :][None] - for nn in range(n_coarse, N_FINE_CODEBOOKS): - logits = model(nn, in_buffer) - if temp is None: - relevant_logits = logits[0, rel_start_fill_idx:, :CODEBOOK_SIZE] - codebook_preds = torch.argmax(relevant_logits, -1) - else: - relevant_logits = logits[0, :, :CODEBOOK_SIZE] / temp - probs = F.softmax(relevant_logits, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - codebook_preds = torch.hstack( - [ - torch.multinomial(probs[nnn], num_samples=1).to(inf_device) - for nnn in range(rel_start_fill_idx, 1024) - ] - ) - in_buffer[0, rel_start_fill_idx:, nn] = codebook_preds - del logits, codebook_preds - # transfer over info into model_in and convert to numpy - for nn in range(n_coarse, N_FINE_CODEBOOKS): - in_arr[ - start_fill_idx : start_fill_idx + (1024 - rel_start_fill_idx), nn - ] = in_buffer[0, rel_start_fill_idx:, nn] - del in_buffer - gen_fine_arr = in_arr.detach().cpu().numpy().squeeze().T - del in_arr - if OFFLOAD_CPU: - model.to("cpu") - gen_fine_arr = gen_fine_arr[:, n_history:] - if n_remove_from_end > 0: - gen_fine_arr = gen_fine_arr[:, :-n_remove_from_end] - assert gen_fine_arr.shape[-1] == x_coarse_gen.shape[-1] - _clear_cuda_cache() - return gen_fine_arr - - -def codec_decode(fine_tokens): - """Turn quantized audio codes into audio array using encodec.""" - # load models if not yet exist - global models - global models_devices - if "codec" not in models: - preload_models() - model = models["codec"] - if OFFLOAD_CPU: - model.to(models_devices["codec"]) - device = 
next(model.parameters()).device - arr = torch.from_numpy(fine_tokens)[None] - arr = arr.to(device) - arr = arr.transpose(0, 1) - emb = model.quantizer.decode(arr) - out = model.decoder(emb) - audio_arr = out.detach().cpu().numpy().squeeze() - del arr, emb, out - if OFFLOAD_CPU: - model.to("cpu") - return audio_arr diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/util/__init__.py b/spaces/kevinwang676/VoiceChangers/src/face3d/util/__init__.py deleted file mode 100644 index 04eecb58b62f8c9d11d17606c6241d278a48b9b9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/util/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -"""This package includes a miscellaneous collection of useful helper functions.""" -from src.face3d.util import * - diff --git a/spaces/kevkev05/Chat-To-Sequence/DNAseq.py b/spaces/kevkev05/Chat-To-Sequence/DNAseq.py deleted file mode 100644 index 2a73da5c8fc05cb2dead4d2aed6d76e355818822..0000000000000000000000000000000000000000 --- a/spaces/kevkev05/Chat-To-Sequence/DNAseq.py +++ /dev/null @@ -1,51 +0,0 @@ -from sequence import Sequence - - -class DNAseq(Sequence): - def get_base_counts(self): - base_counts = { - 'a': self.get_unit_count('a'), - 't': self.get_unit_count('t'), - 'g': self.get_unit_count('g'), - 'c': self.get_unit_count('c'), - } - return base_counts - # Total number of each base within the sequence returned as a dictionary - - def get_base_percentages(self): - base_percentages = { - 'a': self.get_unit_percentage('a'), - 't': self.get_unit_percentage('t'), - 'g': self.get_unit_percentage('g'), - 'c': self.get_unit_percentage('c'), - } - return base_percentages - # Base content percentage for each base returned as a dictionary - - def get_gc_content(self): - total_bases = self.get_seq_length() - gc_count = self.sequence.count('g') + self.sequence.count('c') - gc_content = (gc_count / total_bases) * 100 - return gc_content - # Guanine Cytosine (gc) content by percentage - - def get_at_content(self): - total_bases = self.get_seq_length() - at_count = self.sequence.count('a') + self.sequence.count('t') - at_content = (at_count / total_bases) * 100 - return at_content - # Adenine Thymine (at) content by percentage - - def get_purine_content(self): - total_bases = self.get_seq_length() - ag_count = self.sequence.count('a') + self.sequence.count('g') - ag_content = (ag_count / total_bases) * 100 - return ag_content - # Adenine Guanine (purine) content by percentage - - def get_pyrimidine_content(self): - total_bases = self.get_seq_length() - ct_count = self.sequence.count('c') + self.sequence.count('t') - ct_content = (ct_count / total_bases) * 100 - return ct_content - # Cytosine Thymine (pyrimidine) content by percentage diff --git a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/lib/fctTile.py b/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/lib/fctTile.py deleted file mode 100644 index aa2415d9b5f6b221e14f3bceb9553deaf61418fc..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/lib/fctTile.py +++ /dev/null @@ -1 +0,0 @@ -#--- factory class for tile operations \ No newline at end of file diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/danet_r50-d8.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/danet_r50-d8.py deleted file mode 100644 index 2c934939fac48525f22ad86f489a041dd7db7d09..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/danet_r50-d8.py 
+++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DAHead', - in_channels=2048, - in_index=3, - channels=512, - pam_channels=64, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py deleted file mode 100644 index a33e7972877f902d0e7d18401ca675e3e4e60a18..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py +++ /dev/null @@ -1,51 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='FCNHead', - in_channels=64, - in_index=4, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/kornia/edge_detector/README.md b/spaces/kornia/edge_detector/README.md deleted file mode 100644 index 0a9d6caed5bdce7cbe38fc814a35ecdf5f8989fc..0000000000000000000000000000000000000000 --- a/spaces/kornia/edge_detector/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Edge Detector -emoji: 🔥 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/krazyxki/V-1488abed/src/proxy/rewriters/index.ts b/spaces/krazyxki/V-1488abed/src/proxy/rewriters/index.ts deleted file mode 100644 index 9d617ed845586e571f64c95d99a41ed927bab211..0000000000000000000000000000000000000000 --- 
a/spaces/krazyxki/V-1488abed/src/proxy/rewriters/index.ts +++ /dev/null @@ -1,14 +0,0 @@ -import type { Request } from "express"; -import type { ClientRequest } from "http"; -import type { ProxyReqCallback } from "http-proxy"; - -export { addKey } from "./add-key"; -export { languageFilter } from "./language-filter"; -export { limitOutputTokens } from "./limit-output-tokens"; -export { finalizeBody } from "./finalize-body"; -export { transformKoboldPayload } from "./transform-kobold-payload"; - -export type ExpressHttpProxyReqCallback = ProxyReqCallback< - ClientRequest, - Request ->; diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/subset/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/subset/__init__.py deleted file mode 100644 index 7e716f862116bc02279969bbeb785c2d2fac078b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/subset/__init__.py +++ /dev/null @@ -1,3739 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod - -from fontTools import config -from fontTools.misc.roundTools import otRound -from fontTools import ttLib -from fontTools.ttLib.tables import otTables -from fontTools.ttLib.tables.otBase import USE_HARFBUZZ_REPACKER -from fontTools.otlLib.maxContextCalc import maxCtxFont -from fontTools.pens.basePen import NullPen -from fontTools.misc.loggingTools import Timer -from fontTools.misc.cliTools import makeOutputFileName -from fontTools.subset.util import _add_method, _uniq_sort -from fontTools.subset.cff import * -from fontTools.subset.svg import * -from fontTools.varLib import varStore # for subset_varidxes -import sys -import struct -import array -import logging -from collections import Counter, defaultdict -from functools import reduce -from types import MethodType - -__usage__ = "pyftsubset font-file [glyph...] [--option=value]..." - -__doc__ = ( - """\ -pyftsubset -- OpenType font subsetter and optimizer - -pyftsubset is an OpenType font subsetter and optimizer, based on fontTools. -It accepts any TT- or CFF-flavored OpenType (.otf or .ttf) or WOFF (.woff) -font file. The subsetted glyph set is based on the specified glyphs -or characters, and specified OpenType layout features. - -The tool also performs some size-reducing optimizations, aimed for using -subset fonts as webfonts. Individual optimizations can be enabled or -disabled, and are enabled by default when they are safe. - -Usage: """ - + __usage__ - + """ - -At least one glyph or one of --gids, --gids-file, --glyphs, --glyphs-file, ---text, --text-file, --unicodes, or --unicodes-file, must be specified. - -Args: - -font-file - The input font file. -glyph - Specify one or more glyph identifiers to include in the subset. Must be - PS glyph names, or the special string '*' to keep the entire glyph set. - -Initial glyph set specification -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -These options populate the initial glyph set. Same option can appear -multiple times, and the results are accummulated. - ---gids=[,...] - Specify comma/whitespace-separated list of glyph IDs or ranges as decimal - numbers. For example, --gids=10-12,14 adds glyphs with numbers 10, 11, - 12, and 14. - ---gids-file= - Like --gids but reads from a file. Anything after a '#' on any line is - ignored as comments. - ---glyphs=[,...] - Specify comma/whitespace-separated PS glyph names to add to the subset. 
- Note that only PS glyph names are accepted, not gidNNN, U+XXXX, etc - that are accepted on the command line. The special string '*' will keep - the entire glyph set. - ---glyphs-file= - Like --glyphs but reads from a file. Anything after a '#' on any line - is ignored as comments. - ---text= - Specify characters to include in the subset, as UTF-8 string. - ---text-file= - Like --text but reads from a file. Newline character are not added to - the subset. - ---unicodes=[,...] - Specify comma/whitespace-separated list of Unicode codepoints or - ranges as hex numbers, optionally prefixed with 'U+', 'u', etc. - For example, --unicodes=41-5a,61-7a adds ASCII letters, so does - the more verbose --unicodes=U+0041-005A,U+0061-007A. - The special strings '*' will choose all Unicode characters mapped - by the font. - ---unicodes-file= - Like --unicodes, but reads from a file. Anything after a '#' on any - line in the file is ignored as comments. - ---ignore-missing-glyphs - Do not fail if some requested glyphs or gids are not available in - the font. - ---no-ignore-missing-glyphs - Stop and fail if some requested glyphs or gids are not available - in the font. [default] - ---ignore-missing-unicodes [default] - Do not fail if some requested Unicode characters (including those - indirectly specified using --text or --text-file) are not available - in the font. - ---no-ignore-missing-unicodes - Stop and fail if some requested Unicode characters are not available - in the font. - Note the default discrepancy between ignoring missing glyphs versus - unicodes. This is for historical reasons and in the future - --no-ignore-missing-unicodes might become default. - -Other options -^^^^^^^^^^^^^ - -For the other options listed below, to see the current value of the option, -pass a value of '?' to it, with or without a '='. - -Examples:: - - $ pyftsubset --glyph-names? - Current setting for 'glyph-names' is: False - $ ./pyftsubset --name-IDs=? - Current setting for 'name-IDs' is: [0, 1, 2, 3, 4, 5, 6] - $ ./pyftsubset --hinting? --no-hinting --hinting? - Current setting for 'hinting' is: True - Current setting for 'hinting' is: False - -Output options -^^^^^^^^^^^^^^ - ---output-file= - The output font file. If not specified, the subsetted font - will be saved in as font-file.subset. - ---flavor= - Specify flavor of output font file. May be 'woff' or 'woff2'. - Note that WOFF2 requires the Brotli Python extension, available - at https://github.com/google/brotli - ---with-zopfli - Use the Google Zopfli algorithm to compress WOFF. The output is 3-8 % - smaller than pure zlib, but the compression speed is much slower. - The Zopfli Python bindings are available at: - https://pypi.python.org/pypi/zopfli - ---harfbuzz-repacker - By default, we serialize GPOS/GSUB using the HarfBuzz Repacker when - uharfbuzz can be imported and is successful, otherwise fall back to - the pure-python serializer. Set the option to force using the HarfBuzz - Repacker (raises an error if uharfbuzz can't be found or fails). - ---no-harfbuzz-repacker - Always use the pure-python serializer even if uharfbuzz is available. - -Glyph set expansion -^^^^^^^^^^^^^^^^^^^ - -These options control how additional glyphs are added to the subset. - ---retain-gids - Retain glyph indices; just empty glyphs not needed in-place. - ---notdef-glyph - Add the '.notdef' glyph to the subset (ie, keep it). [default] - ---no-notdef-glyph - Drop the '.notdef' glyph unless specified in the glyph set. 
This - saves a few bytes, but is not possible for Postscript-flavored - fonts, as those require '.notdef'. For TrueType-flavored fonts, - this works fine as long as no unsupported glyphs are requested - from the font. - ---notdef-outline - Keep the outline of '.notdef' glyph. The '.notdef' glyph outline is - used when glyphs not supported by the font are to be shown. It is not - needed otherwise. - ---no-notdef-outline - When including a '.notdef' glyph, remove its outline. This saves - a few bytes. [default] - ---recommended-glyphs - Add glyphs 0, 1, 2, and 3 to the subset, as recommended for - TrueType-flavored fonts: '.notdef', 'NULL' or '.null', 'CR', 'space'. - Some legacy software might require this, but no modern system does. - ---no-recommended-glyphs - Do not add glyphs 0, 1, 2, and 3 to the subset, unless specified in - glyph set. [default] - ---no-layout-closure - Do not expand glyph set to add glyphs produced by OpenType layout - features. Instead, OpenType layout features will be subset to only - rules that are relevant to the otherwise-specified glyph set. - ---layout-features[+|-]=[,...] - Specify (=), add to (+=) or exclude from (-=) the comma-separated - set of OpenType layout feature tags that will be preserved. - Glyph variants used by the preserved features are added to the - specified subset glyph set. By default, 'calt', 'ccmp', 'clig', 'curs', - 'dnom', 'frac', 'kern', 'liga', 'locl', 'mark', 'mkmk', 'numr', 'rclt', - 'rlig', 'rvrn', and all features required for script shaping are - preserved. To see the full list, try '--layout-features=?'. - Use '*' to keep all features. - Multiple --layout-features options can be provided if necessary. - Examples: - - --layout-features+=onum,pnum,ss01 - * Keep the default set of features and 'onum', 'pnum', 'ss01'. - --layout-features-='mark','mkmk' - * Keep the default set of features but drop 'mark' and 'mkmk'. - --layout-features='kern' - * Only keep the 'kern' feature, drop all others. - --layout-features='' - * Drop all features. - --layout-features='*' - * Keep all features. - --layout-features+=aalt --layout-features-=vrt2 - * Keep default set of features plus 'aalt', but drop 'vrt2'. - ---layout-scripts[+|-]= - - Can a Model Be Differentially Private and Fair? - - - - - - - - - - - - - - - -
    - -
    - -

    Can a Model Be Differentially Private and Fair?

    -
    Training models with differential privacy stops them from inadvertently leaking sensitive data, but there’s an unexpected side-effect: reduced accuracy on underrepresented subgroups.
    -

    Imagine you want to use machine learning to suggest new bands to listen to. You could do this by having lots of people list their favorite bands and using them to train a model. The trained model might be quite useful and fun, but if someone pokes and prods at the model in just the right way, they could extract the music preferences of someone whose data was used to train the model. Other kinds of models are potentially vulnerable; credit card numbers have been pulled out of language models and actual faces reconstructed from image models.

    -

    Training with differential privacy limits how much information about any one data point can be extracted, but in some cases there’s an unexpected side-effect: reduced accuracy, with underrepresented subgroups disparately impacted.

    -
    - -

    Recall that machine learning models are typically trained with gradient descent, a series of small steps taken to minimize an error function. To show how a model can leak its training data, we’ve trained two simple models to separate red and blue dots using two simple datasets that differ in one way: a single isolated data point in the upper left has been switched from red to blue.
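    As a concrete reference for the kind of training loop the figures are built on, here is a minimal sketch (not the explorable's actual code) of plain gradient descent fitting a logistic-regression boundary to 2D red/blue points; the toy dataset, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

# Toy 2D dataset: each row is a point, labels are 0 (red) or 1 (blue).
X = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.8, 0.3]])
y = np.array([1, 0, 1, 0])

w, b = np.zeros(2), 0.0   # parameters of the boundary line
lr = 0.5                  # learning rate (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    p = sigmoid(X @ w + b)           # predicted probability of "blue"
    err = p - y                      # gradient of the cross-entropy error w.r.t. the logits
    w -= lr * (X.T @ err) / len(y)   # small step that lowers the error
    b -= lr * err.mean()
```

    Each update nudges the boundary line a little; the differentially private variant described below modifies exactly this update.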

    -
    - -

    Notice that the two models have very different boundary lines near the isolated point by the end of the training. Someone with access to the trained model might be able to infer if the point in the upper left is red or blue — if the color represented sensitive information, like someone’s voting record, that could be quite bad!

    -

    Protecting the Privacy of Training Points

    -

    We can prevent a single data point from drastically altering the model by adding two operations to each training step:²

    -
    -
    • Clipping the gradient (here, limiting how much the boundary line can move with each step) to bound the maximum impact a single data point can have on the final model.
    -
    • Adding random noise to the gradient.
    -
    -

    Try increasing the random noise below. We’re now training lots of differentially private models; the more the potential models for the red and blue outlier points overlap, the more plausible deniability the person in the upper left has.

    -
    - -

    You can also try dragging the other points around and adjusting the gradient clipping. Are points in the center or outliers more likely to modify the boundary lines? In two dimensions there’s a limited number of outliers, but in higher dimensions more points are outliers and much more information can be extracted from a trained model.

    -

    Correctly combined, adding gradient clipping and random noise to gradient descent makes it possible to train a model with differential privacy – we can guarantee that a model trained on a given dataset is essentially indistinguishable from a model trained on the same dataset with a single point changed.
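    As a minimal sketch of what one such differentially private update could look like (assuming a helper that returns one gradient per training example; the clip norm, noise scale, and learning rate are illustrative, and a real system would use a vetted DP library and track the privacy budget):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=np.random.default_rng(0)):
    """One differentially private update: clip each example's gradient, then add noise."""
    clipped = []
    for g in per_example_grads:                          # one gradient vector per data point
        norm = np.linalg.norm(g) + 1e-12
        clipped.append(g * min(1.0, clip_norm / norm))   # bound any single point's influence
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_example_grads)
    return params - lr * noisy_mean                      # take the clipped, noised descent step
```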

    -

    Predictions on Outliers Change the Most

    -

    What does this look like in practice? In Distribution Density, Tails, and Outliers in Machine Learning, a series of increasingly differentially private models were trained on MNIST digits. Every digit in the training set was ranked according to the highest level of privacy that correctly classified it.
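    A sketch of how that ranking could be reproduced, assuming a hypothetical train_model(noise_multiplier) helper and a list of settings ordered from weakest to strongest privacy (none of these names come from the paper):

```python
# Hypothetical helpers: train_model(...) returns a classifier with a .predict() method.
noise_levels = [0.0, 0.5, 1.0, 2.0, 4.0]           # weakest to strongest privacy
models = [train_model(noise_multiplier=s) for s in noise_levels]

def highest_private_level(x, label):
    """Strongest privacy setting whose model still classifies this example correctly."""
    best = None
    for s, m in zip(noise_levels, models):
        if m.predict(x) == label:
            best = s                                # keep the strongest level that still works
    return best                                     # None: even the non-private model is wrong
```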

    -
    - -

    On the lower left, you can see digits labeled as “3” in the training data that look more like a “2” and a “9”. They’re very different from the other “3”s in the training data so adding just a bit of privacy protection causes the model to no longer classify them as “3”. Under some specific circumstances, differential privacy can actually improve how well the model generalizes to data it wasn’t trained on by limiting the influence of spurious examples.

    -

    The right side shows more canonical digits which are classified correctly even with high levels of privacy because they’re quite similar to other digits in the training data.

    -

    The Accuracy Tradeoff

    -

    Limiting how much a model can learn from a single example does have a downside: it can also decrease the model’s accuracy. With 7,500 training points, 90% accuracy on MNIST digits is only achievable with an extremely low level of privacy protection; increasing privacy quickly lowers the model’s accuracy.

    -

    Collecting more training data offers a way out of this accuracy/privacy tradeoff. With 60,000 training points, 90% accuracy can be reached with a higher privacy level than almost all real-world deployments of differential privacy.

    -
    - -

    Looking at the differences between predictions by digit class shows another potential complication: some classes are harder to identify than others. Detecting an “8” with high confidence requires more training data and/or lower privacy than detecting a “0” with high confidence.

    -
    - -

    This problem is exacerbated if the training data has fewer examples of one class than the others. Trying to predict an uncommon event with a differentially private model can require an enormous amount of data.

    -

    Implications for Fairness

    -

    Outliers also aren’t evenly distributed within a class. Below, MNIST digits are colored by their sensitivity to higher privacy levels and projected with UMAP, forming several clusters of privacy-sensitive yellow digits. It’s possible to inadvertently train a model with good overall accuracy on a class but very low accuracy on a smaller group within the class.
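    A sketch of how a picture like this can be produced, assuming you already have penultimate-layer embeddings and a per-digit privacy-sensitivity score (umap-learn and matplotlib are assumed; the variable names are placeholders):

```python
import umap
import matplotlib.pyplot as plt

# embeddings: (n_digits, d) activations from the classifier's penultimate layer
# sensitivity: (n_digits,) score, e.g. the strongest privacy level that still classifies each digit
proj = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
plt.scatter(proj[:, 0], proj[:, 1], c=sensitivity, cmap="viridis", s=4)
plt.colorbar(label="privacy sensitivity")
plt.show()
```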

    -
    - -

    There’s nothing that makes a “1” slanted to the left intrinsically harder to classify, but because there are only a few slanted “1”s in the training data it’s difficult to make a model that classifies them accurately without leaking information.

    -

    This disparate impact doesn’t just happen in datasets of differently drawn digits: increased levels of differential privacy in a range of image and language models disproportionately decreased accuracy on underrepresented subgroups. And adding differential privacy to a medical model reduced the influence of Black patients’ data on the model while increasing the influence of white patients’ data.

    -

    Lowering the privacy level might not help non-majoritarian data points either – they’re the ones most susceptible to having their information exposed. Again, escaping the accuracy/privacy tradeoff requires collecting more data – this time from underrepresented subgroups.

    -

    More Reading

    -

    There are deep connections between generalization, memorization and privacy that are still not well understood. Slightly changing the privacy constraints, for example, can create new options. If public, unlabeled data exists, a “Private Aggregation of Teacher Ensembles” could be used instead of gradient clipping and random noise to train a differentially private model with a smaller disparate impact on accuracy.

    -

    Finding ways to increase privacy with a smaller impact on accuracy is an active area of research – model architectures designed with privacy in mind and better dataset cleaning look like promising avenues.

    -

    There are also additional accuracy/privacy/fairness tradeoffs beyond what’s discussed in this post. Even if a differentially private model doesn’t have large accuracy gaps between subgroups, enforcing fairness metrics can reduce privacy or accuracy.

    -

    This post focuses on protecting the privacy of individual data points. In practice more work might be necessary to ensure that the privacy of users – who could contribute much more than a single data point each – is also protected.

    -

    These questions are also significant outside of machine learning. Allocating resources based on a differentially private dataset – with no machine learning model involved – can also disproportionately affect different groups. The 2020 Census is the first to use differential privacy and this could have a wide range of impacts, including how congressional districts are drawn.

    -

    Credits

    -

    Adam Pearce // January 2022

    -

    Thanks to Abhradeep Thakurta, Andreas Terzis, Andy Coenen, Asma Ghandeharioun, Brendan McMahan, Ellen Jiang, Emily Reif, Fernanda Viégas, James Wexler, Kevin Robinson, Matthew Jagielski, Martin Wattenberg, Meredith Morris, Miguel Guevara, Nicolas Papernot and Nithum Thain for their help with this piece.

    -

    Footnotes

    -

    To speed up training at the cost of looser privacy bounds, gradients, clipping and noise can be calculated on a group of data points instead of individual data points.

    -

    The “ε” in ε-differential privacy essentially measures the overlap in two distributions after changing a single data point.
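    For reference (the standard definition, not a quote from this post): a randomized training mechanism M is (ε, δ)-differentially private if, for every pair of datasets D and D′ that differ in a single point and every set of possible outputs S,

```latex
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

    Smaller ε forces the two output distributions to overlap more (stronger privacy), and δ allows a small probability of exceeding the bound; pure ε-differential privacy is the δ = 0 case.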

    -

    Clipping and noising are also used outside of differential privacy as regularization techniques to improve accuracy.

    In addition to accidentally mislabeled examples, differential privacy can also provide some protection against data poisoning attacks.

    -

    While visually similar digits aren’t necessarily interpreted in similar ways by the model, the clustering of visually similar digits in the UMAP diagram at the bottom of the page (which projects embeddings from the penultimate layer of the digit classifier) suggests there is a close connection here.

    -

    Rebalancing the dataset without collecting more data doesn’t avoid this privacy/accuracy tradeoff – upsampling the smaller class reduces privacy and downsampling the larger class reduces data and lowers accuracy.

    -

    See the appendix on Subgroup Size and Accuracy for more detail.

    -

    Appendix: Subgroup Size and Accuracy

    -

    How, exactly, does the amount of training data, the privacy level and the percentage of data from a subgroup impact accuracy? Using MNIST digits rotated 90° as a stand-in for a smaller subgroup, we can see how the accuracy of a series of simple models that classify “1”s and “7”s changes based on these attributes.
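    A sketch of how such a stand-in subgroup can be built, assuming the digit images are available as a NumPy array of shape (n, 28, 28) (the 5% default mirrors the text; everything else is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_rotated_subgroup(images, fraction=0.05):
    """Rotate a random fraction of the digit images by 90 degrees to act as a smaller subgroup."""
    images = images.copy()
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    images[idx] = np.rot90(images[idx], k=1, axes=(1, 2))  # rotate the selected 28x28 images
    return images, idx
```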

    -

    On the far left, models without any rotated digits in the training data never classify those digits more accurately than random guessing. By rotating 5% of the training digits, a small slice of models with lots of training data and low privacy can accurately classify rotated digits.

    -
    - -

    Increasing the proportion of rotated digits to 10% or 20% or even more makes it possible to train a higher privacy model that performs well on both types of digits with the same amount of training data.

    -

    Click on one of the models above and you can see how the accuracy gap shifts as the number of training points, privacy level and percentage of rotated digits are independently changed.

    -
    - -

    Intuitively, adding more training data has diminishing marginal increases to accuracy. Accuracy on the smaller group of rotated digits, which may just be on the cusp of being learned, falls off faster as the effective amount of training data is decreased — a disparate reduction in accuracy.

    -

    More Explorables

    -

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/mike-ravkine/can-ai-code-results/Dockerfile b/spaces/mike-ravkine/can-ai-code-results/Dockerfile deleted file mode 100644 index 8778b920fb66fb7f73012cf30e2efce5e1e717bc..0000000000000000000000000000000000000000 --- a/spaces/mike-ravkine/can-ai-code-results/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN git clone https://github.com/the-crypt-keeper/can-ai-code.git /code/can_ai_code - -WORKDIR /code/can_ai_code - -CMD ["streamlit", "run", "app.py", "--server.address", "0.0.0.0", "--server.port", "7860"] diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer/transformer_base.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer/transformer_base.py deleted file mode 100644 index b4d5604dbbae979b424650882d33b45ebab323e6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/transformer/transformer_base.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoderDecoderModel -from fairseq.models.transformer import ( - TransformerEncoderBase, - TransformerDecoderBase, - TransformerConfig, -) -from torch import Tensor - - -class TransformerModelBase(FairseqEncoderDecoderModel): - """ - Transformer model from `"Attention Is All You Need" (Vaswani, et al, 2017) - `_. - - Args: - encoder (TransformerEncoder): the encoder - decoder (TransformerDecoder): the decoder - - The Transformer model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.transformer_parser - :prog: - """ - - def __init__(self, cfg, encoder, decoder): - super().__init__(encoder, decoder) - self.cfg = cfg - self.supports_align_args = True - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. 
- gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=False, with_prefix="" - ) - - @classmethod - def build_model(cls, cfg, task): - """Build a new model instance.""" - - # -- TODO T96535332 - # bug caused by interaction between OmegaConf II and argparsing - cfg.decoder.input_dim = int(cfg.decoder.input_dim) - cfg.decoder.output_dim = int(cfg.decoder.output_dim) - # -- - - if cfg.encoder.layers_to_keep: - cfg.encoder.layers = len(cfg.encoder.layers_to_keep.split(",")) - if cfg.decoder.layers_to_keep: - cfg.decoder.layers = len(cfg.decoder.layers_to_keep.split(",")) - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if cfg.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if cfg.encoder.embed_dim != cfg.decoder.embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if cfg.decoder.embed_path and ( - cfg.decoder.embed_path != cfg.encoder.embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - cfg.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = cls.build_embedding( - cfg, tgt_dict, cfg.decoder.embed_dim, cfg.decoder.embed_path - ) - if cfg.offload_activations: - cfg.checkpoint_activations = True # offloading implies checkpointing - encoder = cls.build_encoder(cfg, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - if not cfg.share_all_embeddings: - # fsdp_wrap is a no-op when --ddp-backend != fully_sharded - encoder = fsdp_wrap(encoder, min_num_params=cfg.min_params_to_wrap) - decoder = fsdp_wrap(decoder, min_num_params=cfg.min_params_to_wrap) - return cls(cfg, encoder, decoder) - - @classmethod - def build_embedding(cls, cfg, dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - @classmethod - def build_encoder(cls, cfg, src_dict, embed_tokens): - return TransformerEncoderBase(cfg, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, cfg, tgt_dict, embed_tokens): - return TransformerDecoderBase( - cfg, - tgt_dict, - embed_tokens, - no_encoder_attn=cfg.no_cross_attention, - ) - - # TorchScript doesn't support optional arguments with variable length (**kwargs). - # Current workaround is to add union of all arguments in child classes. - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - return_all_hiddens: bool = True, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - """ - Run the forward pass for an encoder-decoder model. - - Copied from the base class, but without ``**kwargs``, - which are not supported by TorchScript. 
- """ - encoder_out = self.encoder( - src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - return decoder_out - - # Since get_normalized_probs is in the Fairseq Model which is not scriptable, - # I rewrite the get_normalized_probs from Base Class to call the - # helper function in the Base Class. - @torch.jit.export - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/setup.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/setup.py deleted file mode 100644 index 052635be79b466d0ad56cf5cf607bd10c2297ecf..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/lightconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="lightconv_layer", - ext_modules=[ - CUDAExtension( - "lightconv_cuda", - [ - "lightconv_cuda.cpp", - "lightconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq_cli/train.py b/spaces/mshukor/UnIVAL/fairseq/fairseq_cli/train.py deleted file mode 100644 index 83475873138c5d1bac288c234afb6b4a1a7882d7..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq_cli/train.py +++ /dev/null @@ -1,514 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Train a new model on one or across multiple GPUs. -""" - -import argparse -import logging -import math -import os -import sys -from typing import Dict, Optional, Any, List, Tuple, Callable - -# We need to setup root logger before importing any fairseq libraries. 
-logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.train") - -import numpy as np -import torch -from fairseq import ( - checkpoint_utils, - options, - quantization_utils, - tasks, - utils, -) -from fairseq.data import iterators, data_utils -from fairseq.data.plasma_utils import PlasmaStore -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import fsdp_enable_wrap, fsdp_wrap, utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics, progress_bar -from fairseq.model_parallel.megatron_trainer import MegatronTrainer -from fairseq.trainer import Trainer -from omegaconf import DictConfig, OmegaConf - - - - -def main(cfg: FairseqConfig) -> None: - if isinstance(cfg, argparse.Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - if distributed_utils.is_master(cfg.distributed_training) and "job_logging_cfg" in cfg: - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - logging.config.dictConfig(OmegaConf.to_container(cfg.job_logging_cfg)) - - assert ( - cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None - ), "Must specify batch size either with --max-tokens or --batch-size" - metrics.reset() - - if cfg.common.log_file is not None: - handler = logging.FileHandler(filename=cfg.common.log_file) - logger.addHandler(handler) - - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - if distributed_utils.is_master(cfg.distributed_training): - checkpoint_utils.verify_checkpoint_directory(cfg.checkpoint.save_dir) - - # Print args - logger.info(cfg) - - if cfg.checkpoint.write_checkpoints_asynchronously: - try: - import iopath # noqa: F401 - except ImportError: - logging.exception( - "Asynchronous checkpoint writing is specified but iopath is " - "not installed: `pip install iopath`" - ) - return - - # Setup task, e.g., translation, language modeling, etc. - task = tasks.setup_task(cfg.task) - - assert cfg.criterion, "Please specify criterion to train a model" - - # Build model and criterion - if cfg.distributed_training.ddp_backend == "fully_sharded": - with fsdp_enable_wrap(cfg.distributed_training): - model = fsdp_wrap(task.build_model(cfg.model)) - else: - model = task.build_model(cfg.model) - criterion = task.build_criterion(cfg.criterion) - logger.info(model) - logger.info("task: {}".format(task.__class__.__name__)) - logger.info("model: {}".format(model.__class__.__name__)) - logger.info("criterion: {}".format(criterion.__class__.__name__)) - logger.info( - "num. shared model params: {:,} (num. trained: {:,})".format( - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False) and p.requires_grad) - ) - ) - - logger.info( - "num. expert model params: {} (num. 
trained: {})".format( - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False) and p.requires_grad), - ) - ) - - # Load valid dataset (we load training data below, based on the latest checkpoint) - # We load the valid dataset AFTER building the model - data_utils.raise_if_valid_subsets_unintentionally_ignored(cfg) - if cfg.dataset.combine_valid_subsets: - task.load_dataset("valid", combine=True, epoch=1) - else: - for valid_sub_split in cfg.dataset.valid_subset.split(","): - task.load_dataset(valid_sub_split, combine=False, epoch=1) - - # (optionally) Configure quantization - if cfg.common.quantization_config_path is not None: - quantizer = quantization_utils.Quantizer( - config_path=cfg.common.quantization_config_path, - max_epoch=cfg.optimization.max_epoch, - max_update=cfg.optimization.max_update, - ) - else: - quantizer = None - - # Build trainer - if cfg.common.model_parallel_size == 1: - trainer = Trainer(cfg, task, model, criterion, quantizer) - else: - trainer = MegatronTrainer(cfg, task, model, criterion) - logger.info( - "training on {} devices (GPUs/TPUs)".format( - cfg.distributed_training.distributed_world_size - ) - ) - logger.info( - "max tokens per device = {} and max sentences per device = {}".format( - cfg.dataset.max_tokens, - cfg.dataset.batch_size, - ) - ) - - # Load the latest checkpoint if one is available and restore the - # corresponding train iterator - extra_state, epoch_itr = checkpoint_utils.load_checkpoint( - cfg.checkpoint, - trainer, - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - if cfg.common.tpu: - import torch_xla.core.xla_model as xm - xm.rendezvous("load_checkpoint") # wait for all workers - - max_epoch = cfg.optimization.max_epoch or math.inf - lr = trainer.get_lr() - - train_meter = meters.StopwatchMeter() - train_meter.start() - while epoch_itr.next_epoch_idx <= max_epoch: - if lr <= cfg.optimization.stop_min_lr: - logger.info( - f"stopping training because current learning rate ({lr}) is smaller " - "than or equal to minimum learning rate " - f"(--stop-min-lr={cfg.optimization.stop_min_lr})" - ) - break - - # train for one epoch - valid_losses, should_stop = train(cfg, trainer, task, epoch_itr) - if should_stop: - break - - # only use first validation loss to update the learning rate - lr = trainer.lr_step(epoch_itr.epoch, valid_losses[0]) - - epoch_itr = trainer.get_train_iterator( - epoch_itr.next_epoch_idx, - # sharded data: get train iterator for next epoch - load_dataset=task.has_sharded_data("train"), - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - train_meter.stop() - logger.info("done training in {:.1f} seconds".format(train_meter.sum)) - - # ioPath implementation to wait for all asynchronous file writes to complete. - if cfg.checkpoint.write_checkpoints_asynchronously: - logger.info( - "ioPath PathManager waiting for all asynchronous checkpoint " - "writes to finish." 
- ) - PathManager.async_close() - logger.info("ioPath PathManager finished waiting.") - - -def should_stop_early(cfg: DictConfig, valid_loss: float) -> bool: - # skip check if no validation was done in the current epoch - if valid_loss is None: - return False - if cfg.checkpoint.patience <= 0: - return False - - def is_better(a, b): - return a > b if cfg.checkpoint.maximize_best_checkpoint_metric else a < b - - prev_best = getattr(should_stop_early, "best", None) - if prev_best is None or is_better(valid_loss, prev_best): - should_stop_early.best = valid_loss - should_stop_early.num_runs = 0 - return False - else: - should_stop_early.num_runs += 1 - if should_stop_early.num_runs >= cfg.checkpoint.patience: - logger.info( - "early stop since valid performance hasn't improved for last {} runs".format( - cfg.checkpoint.patience - ) - ) - return True - else: - return False - - -@metrics.aggregate("train") -def train( - cfg: DictConfig, trainer: Trainer, task: tasks.FairseqTask, epoch_itr -) -> Tuple[List[Optional[float]], bool]: - """Train the model for one epoch and return validation losses.""" - # Initialize data iterator - itr = epoch_itr.next_epoch_itr( - fix_batches_to_gpus=cfg.distributed_training.fix_batches_to_gpus, - shuffle=(epoch_itr.next_epoch_idx > cfg.dataset.curriculum), - ) - update_freq = ( - cfg.optimization.update_freq[epoch_itr.epoch - 1] - if epoch_itr.epoch <= len(cfg.optimization.update_freq) - else cfg.optimization.update_freq[-1] - ) - itr = iterators.GroupedIterator(itr, update_freq) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_file=cfg.common.log_file, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - azureml_logging=( - cfg.common.azureml_logging - if distributed_utils.is_master(cfg.distributed_training) - else False - ), - ) - progress.update_config(_flatten_config(cfg)) - - trainer.begin_epoch(epoch_itr.epoch) - - valid_subsets = cfg.dataset.valid_subset.split(",") - should_stop = False - num_updates = trainer.get_num_updates() - logger.info("Start iterating over samples") - for i, samples in enumerate(progress): - with metrics.aggregate("train_inner"), torch.autograd.profiler.record_function( - "train_step-%d" % i - ): - log_output = trainer.train_step(samples) - - if log_output is not None: # not OOM, overflow, ... 
- # log mid-epoch stats - num_updates = trainer.get_num_updates() - if num_updates % cfg.common.log_interval == 0: - stats = get_training_stats(metrics.get_smoothed_values("train_inner")) - progress.log(stats, tag="train_inner", step=num_updates) - - # reset mid-epoch stats after each log interval - # the end-of-epoch stats will still be preserved - metrics.reset_meters("train_inner") - - end_of_epoch = not itr.has_next() - valid_losses, should_stop = validate_and_save( - cfg, trainer, task, epoch_itr, valid_subsets, end_of_epoch - ) - - if should_stop: - break - - # log end-of-epoch stats - logger.info("end of epoch {} (average epoch stats below)".format(epoch_itr.epoch)) - stats = get_training_stats(metrics.get_smoothed_values("train")) - progress.print(stats, tag="train", step=num_updates) - - # reset epoch-level meters - metrics.reset_meters("train") - return valid_losses, should_stop - - -def _flatten_config(cfg: DictConfig): - config = OmegaConf.to_container(cfg) - # remove any legacy Namespaces and replace with a single "args" - namespace = None - for k, v in list(config.items()): - if isinstance(v, argparse.Namespace): - namespace = v - del config[k] - if namespace is not None: - config["args"] = vars(namespace) - return config - - -def validate_and_save( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - valid_subsets: List[str], - end_of_epoch: bool, -) -> Tuple[List[Optional[float]], bool]: - num_updates = trainer.get_num_updates() - max_update = cfg.optimization.max_update or math.inf - - # Stopping conditions (and an additional one based on validation loss later - # on) - should_stop = False - if num_updates >= max_update: - should_stop = True - logger.info( - f"Stopping training due to " - f"num_updates: {num_updates} >= max_update: {max_update}" - ) - - training_time_hours = trainer.cumulative_training_time() / (60 * 60) - if ( - cfg.optimization.stop_time_hours > 0 - and training_time_hours > cfg.optimization.stop_time_hours - ): - should_stop = True - logger.info( - f"Stopping training due to " - f"cumulative_training_time: {training_time_hours} > " - f"stop_time_hours: {cfg.optimization.stop_time_hours} hour(s)" - ) - - do_save = ( - (end_of_epoch and epoch_itr.epoch % cfg.checkpoint.save_interval == 0) - or should_stop - or ( - cfg.checkpoint.save_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.checkpoint.save_interval_updates == 0 - and num_updates >= cfg.dataset.validate_after_updates - ) - ) - do_validate = ( - (not end_of_epoch and do_save) # validate during mid-epoch saves - or (end_of_epoch and epoch_itr.epoch % cfg.dataset.validate_interval == 0) - or should_stop - or ( - cfg.dataset.validate_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.dataset.validate_interval_updates == 0 - ) - ) and not cfg.dataset.disable_validation and num_updates >= cfg.dataset.validate_after_updates - - # Validate - valid_losses = [None] - if do_validate: - valid_losses = validate(cfg, trainer, task, epoch_itr, valid_subsets) - - should_stop |= should_stop_early(cfg, valid_losses[0]) - - # Save checkpoint - if do_save or should_stop: - checkpoint_utils.save_checkpoint( - cfg.checkpoint, trainer, epoch_itr, valid_losses[0] - ) - - return valid_losses, should_stop - - -def get_training_stats(stats: Dict[str, Any]) -> Dict[str, Any]: - stats["wall"] = round(metrics.get_meter("default", "wall").elapsed_time, 0) - return stats - - -def validate( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - 
subsets: List[str], -) -> List[Optional[float]]: - """Evaluate the model on the validation set(s) and return the losses.""" - - if cfg.dataset.fixed_validation_seed is not None: - # set fixed seed for every validation - utils.set_torch_seed(cfg.dataset.fixed_validation_seed) - - trainer.begin_valid_epoch(epoch_itr.epoch) - valid_losses = [] - for subset in subsets: - logger.info('begin validation on "{}" subset'.format(subset)) - - # Initialize data iterator - itr = trainer.get_valid_iterator(subset).next_epoch_itr( - shuffle=False, set_dataset_epoch=False # use a fixed valid set - ) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - prefix=f"valid on '{subset}' subset", - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - ) - - # create a new root metrics aggregator so validation metrics - # don't pollute other aggregators (e.g., train meters) - with metrics.aggregate(new_root=True) as agg: - for i, sample in enumerate(progress): - if cfg.dataset.max_valid_steps is not None and i > cfg.dataset.max_valid_steps: - break - trainer.valid_step(sample) - - # log validation stats - stats = get_valid_stats(cfg, trainer, agg.get_smoothed_values()) - - if hasattr(task, "post_validate"): - task.post_validate(trainer.get_model(), stats, agg) - - progress.print(stats, tag=subset, step=trainer.get_num_updates()) - - valid_losses.append(stats[cfg.checkpoint.best_checkpoint_metric]) - return valid_losses - - -def get_valid_stats( - cfg: DictConfig, trainer: Trainer, stats: Dict[str, Any] -) -> Dict[str, Any]: - stats["num_updates"] = trainer.get_num_updates() - if hasattr(checkpoint_utils.save_checkpoint, "best"): - key = "best_{0}".format(cfg.checkpoint.best_checkpoint_metric) - best_function = max if cfg.checkpoint.maximize_best_checkpoint_metric else min - stats[key] = best_function( - checkpoint_utils.save_checkpoint.best, - stats[cfg.checkpoint.best_checkpoint_metric], - ) - return stats - - -def cli_main( - modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None -) -> None: - parser = options.get_training_parser() - args = options.parse_args_and_arch(parser, modify_parser=modify_parser) - - cfg = convert_namespace_to_omegaconf(args) - - if cfg.common.use_plasma_view: - server = PlasmaStore(path=cfg.common.plasma_path) - logger.info(f"Started plasma server pid {server.server.pid} {cfg.common.plasma_path}") - - if args.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - # if cfg.common.use_plasma_view: - # server.server.kill() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_iterators.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_iterators.py deleted file mode 100644 index 7b3dd4848553357e5e8326ed3a31cf5d68ceea94..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_iterators.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -from fairseq.data import iterators - - -class TestIterators(unittest.TestCase): - def test_counting_iterator_index(self, ref=None, itr=None): - # Test the indexing functionality of CountingIterator - if ref is None: - assert itr is None - ref = list(range(10)) - itr = iterators.CountingIterator(ref) - else: - assert len(ref) == 10 - assert itr is not None - - self.assertTrue(itr.has_next()) - self.assertEqual(itr.n, 0) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(itr.n, 1) - self.assertEqual(next(itr), ref[1]) - self.assertEqual(itr.n, 2) - itr.skip(3) - self.assertEqual(itr.n, 5) - self.assertEqual(next(itr), ref[5]) - itr.skip(2) - self.assertEqual(itr.n, 8) - self.assertEqual(list(itr), [ref[8], ref[9]]) - self.assertFalse(itr.has_next()) - - def test_counting_iterator_length_mismatch(self): - ref = list(range(10)) - # When the underlying iterable is longer than the CountingIterator, - # the remaining items in the iterable should be ignored - itr = iterators.CountingIterator(ref, total=8) - self.assertEqual(list(itr), ref[:8]) - # When the underlying iterable is shorter than the CountingIterator, - # raise an IndexError when the underlying iterable is exhausted - itr = iterators.CountingIterator(ref, total=12) - self.assertRaises(IndexError, list, itr) - - def test_counting_iterator_take(self): - # Test the "take" method of CountingIterator - ref = list(range(10)) - itr = iterators.CountingIterator(ref) - itr.take(5) - self.assertEqual(len(itr), len(list(iter(itr)))) - self.assertEqual(len(itr), 5) - - itr = iterators.CountingIterator(ref) - itr.take(5) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(next(itr), ref[1]) - itr.skip(2) - self.assertEqual(next(itr), ref[4]) - self.assertFalse(itr.has_next()) - - def test_grouped_iterator(self): - # test correctness - x = list(range(10)) - itr = iterators.GroupedIterator(x, 1) - self.assertEqual(list(itr), [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]) - itr = iterators.GroupedIterator(x, 4) - self.assertEqual(list(itr), [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]) - itr = iterators.GroupedIterator(x, 5) - self.assertEqual(list(itr), [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) - - # test the GroupIterator also works correctly as a CountingIterator - x = list(range(30)) - ref = list(iterators.GroupedIterator(x, 3)) - itr = iterators.GroupedIterator(x, 3) - self.test_counting_iterator_index(ref, itr) - - def test_sharded_iterator(self): - # test correctness - x = list(range(10)) - itr = iterators.ShardedIterator(x, num_shards=1, shard_id=0) - self.assertEqual(list(itr), x) - itr = iterators.ShardedIterator(x, num_shards=2, shard_id=0) - self.assertEqual(list(itr), [0, 2, 4, 6, 8]) - itr = iterators.ShardedIterator(x, num_shards=2, shard_id=1) - self.assertEqual(list(itr), [1, 3, 5, 7, 9]) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0) - self.assertEqual(list(itr), [0, 3, 6, 9]) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=1) - self.assertEqual(list(itr), [1, 4, 7, None]) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=2) - self.assertEqual(list(itr), [2, 5, 8, None]) - - # test CountingIterator functionality - x = list(range(30)) - ref = list(iterators.ShardedIterator(x, num_shards=3, shard_id=0)) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0) - self.test_counting_iterator_index(ref, itr) - - def 
test_counting_iterator_buffered_iterator_take(self): - ref = list(range(10)) - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(len(itr), len(list(iter(itr)))) - self.assertEqual(len(itr), 5) - - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(len(buffered_itr), 5) - self.assertEqual(len(list(iter(buffered_itr))), 5) - - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(next(itr), ref[1]) - itr.skip(2) - self.assertEqual(next(itr), ref[4]) - self.assertFalse(itr.has_next()) - self.assertRaises(StopIteration, next, buffered_itr) - - ref = list(range(4, 10)) - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr, start=4) - itr.take(5) - self.assertEqual(len(itr), 5) - self.assertEqual(len(buffered_itr), 1) - self.assertEqual(next(itr), ref[0]) - self.assertFalse(itr.has_next()) - self.assertRaises(StopIteration, next, buffered_itr) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/mthsk/sovits-models-misc/inference/__init__.py b/spaces/mthsk/sovits-models-misc/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/multimodalart/mariogpt/app.py b/spaces/multimodalart/mariogpt/app.py deleted file mode 100644 index 50088dac1bf0b238ced0261e641fe787670af045..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/mariogpt/app.py +++ /dev/null @@ -1,109 +0,0 @@ - -import gradio as gr -import torch -import uuid -from mario_gpt.dataset import MarioDataset -from mario_gpt.prompter import Prompter -from mario_gpt.lm import MarioLM -from mario_gpt.utils import view_level, convert_level_to_png - -from fastapi import FastAPI -from fastapi.staticfiles import StaticFiles - -import os -import uvicorn - -mario_lm = MarioLM() -device = torch.device('cuda') -mario_lm = mario_lm.to(device) -TILE_DIR = "data/tiles" - -app = FastAPI() - -def make_html_file(generated_level): - level_text = f"""{''' -'''.join(view_level(generated_level,mario_lm.tokenizer))}""" - unique_id = uuid.uuid1() - with open(f"static/demo-{unique_id}.html", 'w', encoding='utf-8') as f: - f.write(f''' - - - - - Mario Game - - - - - - -''') - return f"demo-{unique_id}.html" - -def generate(pipes, enemies, blocks, elevation, temperature = 2.0, level_size = 1399, prompt = ""): - if prompt == "": - prompt = f"{pipes} pipes, {enemies} enemies, {blocks} blocks, {elevation} elevation" - print(f"Using prompt: {prompt}") - prompts = [prompt] - generated_level = mario_lm.sample( - prompts=prompts, - num_steps=level_size, - temperature=temperature, - use_tqdm=True - ) - filename = make_html_file(generated_level) - img = convert_level_to_png(generated_level.squeeze(), TILE_DIR, mario_lm.tokenizer)[0] - - gradio_html = f'''
    - -

    Press the arrow keys to move. Press a to run, s to jump and d to shoot fireflowers

    -
    ''' - return [img, gradio_html] - -with gr.Blocks().queue() as demo: - gr.Markdown('''### Playable demo for MarioGPT: Open-Ended Text2Level Generation through Large Language Models - [[Github](https://github.com/shyamsn97/mario-gpt)], [[Paper](https://arxiv.org/abs/2302.05981)] - ''') - with gr.Tabs(): - with gr.TabItem("Compose prompt"): - with gr.Row(): - pipes = gr.Radio(["no", "little", "some", "many"], label="How many pipes?") - enemies = gr.Radio(["no", "little", "some", "many"], label="How many enemies?") - with gr.Row(): - blocks = gr.Radio(["little", "some", "many"], label="How many blocks?") - elevation = gr.Radio(["low", "high"], label="Elevation?") - with gr.TabItem("Type prompt"): - text_prompt = gr.Textbox(value="", label="Enter your MarioGPT prompt. ex: 'many pipes, many enemies, some blocks, low elevation'") - - with gr.Accordion(label="Advanced settings", open=False): - temperature = gr.Number(value=2.0, label="temperature: Increase these for more diverse, but lower quality, generations") - level_size = gr.Slider(value=1399, precision=0, minimum=100, maximum=2799, step=1, label="level_size") - - btn = gr.Button("Generate level") - with gr.Row(): - with gr.Box(): - level_play = gr.HTML() - level_image = gr.Image() - btn.click(fn=generate, inputs=[pipes, enemies, blocks, elevation, temperature, level_size, text_prompt], outputs=[level_image, level_play]) - gr.Examples( - examples=[ - ["many", "many", "some", "high"], - ["no", "some", "many", "high", 2.0], - ["many", "many", "little", "low", 2.0], - ["no", "no", "many", "high", 2.4], - ], - inputs=[pipes, enemies, blocks, elevation], - outputs=[level_image, level_play], - fn=generate, - cache_examples=True, - ) - -app.mount("/static", StaticFiles(directory="static", html=True), name="static") -app = gr.mount_gradio_app(app, demo, "/", gradio_api_url="http://localhost:7860/") -uvicorn.run(app, host="0.0.0.0", port=7860) \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py deleted file mode 100644 index 738f76875c82aa412063bb5bff15e69c46f20362..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/env python3 - -import glob -import os - -import PIL.Image as Image -import cv2 -import numpy as np -import tqdm -import shutil - - -from saicinpainting.evaluation.utils import load_yaml - - -def generate_masks_for_img(infile, outmask_pattern, mask_size=200, step=0.5): - inimg = Image.open(infile) - width, height = inimg.size - step_abs = int(mask_size * step) - - mask = np.zeros((height, width), dtype='uint8') - mask_i = 0 - - for start_vertical in range(0, height - step_abs, step_abs): - for start_horizontal in range(0, width - step_abs, step_abs): - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 255 - - cv2.imwrite(outmask_pattern.format(mask_i), mask) - - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 0 - mask_i += 1 - - -def main(args): - if not args.indir.endswith('/'): - args.indir += '/' - if not args.outdir.endswith('/'): - args.outdir += '/' - - config = load_yaml(args.config) - - in_files = list(glob.glob(os.path.join(args.indir, '**', f'*{config.img_ext}'), recursive=True)) - for infile in tqdm.tqdm(in_files): - outimg = args.outdir + infile[len(args.indir):] - 
outmask_pattern = outimg[:-len(config.img_ext)] + '_mask{:04d}.png' - - os.makedirs(os.path.dirname(outimg), exist_ok=True) - shutil.copy2(infile, outimg) - - generate_masks_for_img(infile, outmask_pattern, **config.gen_kwargs) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to config for dataset generation') - aparser.add_argument('indir', type=str, help='Path to folder with images') - aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to') - - main(aparser.parse_args()) diff --git a/spaces/nateraw/lavila/main_finetune_classification.py b/spaces/nateraw/lavila/main_finetune_classification.py deleted file mode 100644 index 264d7ad78dd66d691fa1da26923f5e83b61c8f47..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/main_finetune_classification.py +++ /dev/null @@ -1,716 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -from collections import OrderedDict -import json -import math -import numpy as np -import os -import pandas as pd -import sys -import time - -import torch -import torch.nn as nn -import torch.backends.cudnn as cudnn -import torch.cuda.amp as amp -from torch.distributed.optim import ZeroRedundancyOptimizer -import torch.nn.parallel -import torchvision.transforms as transforms -import torchvision.transforms._transforms_video as transforms_video -from sklearn.metrics import confusion_matrix -import wandb - -from lavila.data import datasets -from lavila.data.video_transforms import Permute, SpatialCrop, TemporalCrop -from lavila.models import models -from lavila.models.tokenizer import (MyBertTokenizer, MyDistilBertTokenizer, MyGPT2Tokenizer, SimpleTokenizer) -from lavila.models.utils import inflate_positional_embeds -from lavila.utils import distributed as dist_utils -from lavila.utils.evaluation import accuracy, get_mean_accuracy -from lavila.utils.meter import AverageMeter, ProgressMeter -from lavila.utils.preprocess import generate_label_map -from lavila.utils.random import random_seed -from lavila.utils.scheduler import cosine_scheduler -from lavila.utils.evaluation_ek100cls import get_marginal_indexes, marginalize - - -def get_args_parser(): - parser = argparse.ArgumentParser(description='lavila finetune and evaluation', add_help=False) - # Data - parser.add_argument('--dataset', default='ek100_cls', type=str, - choices=['ek100_cls', 'egtea']) - parser.add_argument('--root', - default='datasets/EK100/video_ht256px/', - type=str, help='path to dataset root') - parser.add_argument('--metadata-train', - default='datasets/EK100/epic-kitchens-100-annotations/EPIC_100_train.csv', - type=str, help='path to metadata file (train set)') - parser.add_argument('--metadata-val', - default='datasets/EK100/epic-kitchens-100-annotations/EPIC_100_validation.csv', - type=str, help='path to metadata file (val set)') - parser.add_argument('--relevancy-path', - default='datasets/EK100/epic-kitchens-100-annotations/retrieval_annotations/relevancy/caption_relevancy_EPIC_100_retrieval_test.pkl', - type=str, help='path to relevancy matrix (val set)') - parser.add_argument('--output-dir', default='./', type=str, help='output dir') - parser.add_argument('--num-crops', default=1, type=int, help='number of crops in transforms for val') - 
parser.add_argument('--num-clips', default=1, type=int, help='number of clips for val') - parser.add_argument('--clip-length', default=16, type=int, help='clip length') - parser.add_argument('--clip-stride', default=2, type=int, help='clip stride') - parser.add_argument('--sparse-sample', action='store_true', help='switch to sparse sampling') - # Model - parser.add_argument('--pretrain-model', default='', type=str, help='path to pretrain model') - parser.add_argument('--resume', default='', type=str, help='path to resume from') - parser.add_argument('--find-unused-parameters', action='store_true', - help='do this during DDP (useful for models with tied weights)') - parser.add_argument('--drop-path-rate', default=0.1, type=float, help='drop path ratio') - parser.add_argument('--dropout-ratio', default=0.5, type=float, help='dropout ratio for the last linear layer') - parser.add_argument('--num-classes', default=3806, nargs='+', type=int, help='number of classes for the last linear layer') - parser.add_argument('--use-vn-classifier', action='store_true') - parser.add_argument('--use-half', action='store_true', help='use half precision at inference') - # Training - parser.add_argument('--epochs', default=100, type=int) - parser.add_argument('--warmup-epochs', default=1, type=int) - parser.add_argument('--start-epoch', default=0, type=int) - parser.add_argument('--batch-size', default=16, type=int, - help='number of samples per-device/per-gpu') - parser.add_argument('--use-sgd', action='store_true') - parser.add_argument('--freeze-temperature', action='store_true', help='freeze temperature if set to True') - parser.add_argument('--lr', default=3e-3, type=float) - parser.add_argument('--fix-lr', action='store_true', help='disable cosine lr decay if set True') - parser.add_argument('--lr-start', default=1e-6, type=float, - help='initial warmup lr') - parser.add_argument('--lr-end', default=1e-5, type=float, - help='minimum final lr') - parser.add_argument('--lr-multiplier-on-backbone', default=0.1, type=float, help='lr multiplier for the backbone') - parser.add_argument('--clip-grad-type', default='norm', choices=['norm', 'value']) - parser.add_argument('--clip-grad-value', default=None, type=float, help='') - parser.add_argument('--update-freq', default=1, type=int, - help='optimizer update frequency (i.e. 
gradient accumulation steps)') - parser.add_argument('--wd', default=0.01, type=float) - parser.add_argument('--betas', default=(0.9, 0.999), nargs=2, type=float) - parser.add_argument('--eps', default=1e-8, type=float) - parser.add_argument('--label-smoothing', default=0.1, type=float, help='label smoothing') - parser.add_argument('--eval-freq', default=5, type=int) - parser.add_argument('--save-freq', default=5, type=int) - parser.add_argument('--disable-amp', action='store_true', - help='disable mixed-precision training (requires more memory and compute)') - parser.add_argument('--use-zero', action='store_true', - help='use ZeroRedundancyOptimizer to save memory') - parser.add_argument('--use-checkpoint', action='store_true', - help='use gradient checkpointing during training for significantly less GPU usage') - # System - parser.add_argument('--print-freq', default=100, type=int, help='print frequency') - parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', - help='number of data loading workers per process') - parser.add_argument('--evaluate', action='store_true', help='eval only') - parser.add_argument('--world-size', default=1, type=int, - help='number of nodes for distributed training') - parser.add_argument('--rank', default=0, type=int, - help='node rank for distributed training') - parser.add_argument("--local_rank", type=int, default=0) - parser.add_argument('--dist-url', default='env://', type=str, - help='url used to set up distributed training') - parser.add_argument('--dist-backend', default='nccl', type=str) - parser.add_argument('--seed', default=0, type=int) - parser.add_argument('--gpu', default=None, type=int, help='GPU id to use.') - parser.add_argument('--wandb', action='store_true', help='Enable WandB logging') - return parser - - -def main(args): - dist_utils.init_distributed_mode(args) - - global best_acc1 - random_seed(args.seed, dist_utils.get_rank()) - - if args.pretrain_model: - ckpt_path = args.pretrain_model - else: - raise Exception('no checkpoint found') - ckpt = torch.load(ckpt_path, map_location='cpu') - - if args.use_vn_classifier: - assert args.dataset == 'ek100_cls' and len(args.num_classes) == 3 - - state_dict = OrderedDict() - for k, v in ckpt['state_dict'].items(): - state_dict[k.replace('module.', '')] = v - - old_args = ckpt['args'] - print("=> creating model: {}".format(old_args.model)) - model = getattr(models, old_args.model)( - pretrained=old_args.load_visual_pretrained, - pretrained2d=old_args.load_visual_pretrained is not None, - text_use_cls_token=old_args.use_cls_token, - project_embed_dim=old_args.project_embed_dim, - timesformer_gated_xattn=False, - timesformer_freeze_space=False, - num_frames=args.clip_length, - drop_path_rate=args.drop_path_rate, - ) - if 'TIMESFORMER' in old_args.model or 'EGOVLP' in old_args.model: - # inflate weight - print('=> inflating PE in models due to different frame numbers') - state_dict = inflate_positional_embeds( - model.state_dict(), state_dict, - num_frames=args.clip_length, - load_temporal_fix='bilinear', - ) - model.load_state_dict(state_dict, strict=True) - print("=> loaded resume checkpoint '{}' (epoch {})".format(ckpt_path, ckpt['epoch'])) - - if args.use_vn_classifier: - model = models.VideoClassifierMultiHead( - model.visual, - dropout=args.dropout_ratio, - num_classes_list=args.num_classes - ) - else: - assert len(args.num_classes) == 1 - model = models.VideoClassifier( - model.visual, - dropout=args.dropout_ratio, - num_classes=args.num_classes[0] - ) - - 
model.cuda(args.gpu) - - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel( - model, device_ids=[args.gpu], bucket_cap_mb=200, - find_unused_parameters=args.find_unused_parameters - ) - - p_wd, p_non_wd = [], [] - p_head_wd, p_head_non_wd = [], [] - for n, p in model.named_parameters(): - if 'fc_cls' in n: - if 'bias' in n: - p_head_non_wd.append(p) - else: - p_head_wd.append(p) - elif not p.requires_grad: - continue # frozen weights - elif p.ndim < 2 or 'bias' in n or 'ln' in n or 'bn' in n: - p_non_wd.append(p) - else: - p_wd.append(p) - - optim_params = [ - {"params": p_wd, "weight_decay": args.wd, "lr": args.lr * args.lr_multiplier_on_backbone}, - {"params": p_non_wd, "weight_decay": 0, "lr": args.lr * args.lr_multiplier_on_backbone}, - {"params": p_head_wd, "weight_decay": args.wd}, - {"params": p_head_non_wd, "weight_decay": 0} - ] - - if args.use_zero: - optimizer = ZeroRedundancyOptimizer( - optim_params, optimizer_class=torch.optim.SGD if args.use_sgd else torch.optim.AdamW, - lr=args.lr, betas=args.betas, eps=args.eps, weight_decay=args.wd - ) - else: - if args.use_sgd: - optimizer = torch.optim.SGD(optim_params, lr=args.lr, momentum=args.betas[0], weight_decay=args.wd) - else: - optimizer = torch.optim.AdamW(optim_params, lr=args.lr, betas=args.betas, - eps=args.eps, weight_decay=args.wd) - scaler = amp.GradScaler(enabled=not args.disable_amp) - # optionally resume from a checkpoint (takes precedence over autoresume) - latest = os.path.join(args.output_dir, 'checkpoint.pt') - if os.path.isfile(latest): - args.resume = '' - if args.resume: - if os.path.isfile(args.resume): - print("=> loading resume checkpoint '{}'".format(args.resume)) - checkpoint = torch.load(args.resume, map_location='cpu') - epoch = checkpoint['epoch'] if 'epoch' in checkpoint else 0 - args.start_epoch = epoch - if not args.distributed: - state_dict = OrderedDict() - for k, v in checkpoint['state_dict'].items(): - state_dict[k.replace('module.', '')] = v - result = model.load_state_dict(state_dict, strict=False) - else: - result = model.load_state_dict(checkpoint['state_dict'], strict=False) - print(result) - optimizer.load_state_dict(checkpoint['optimizer']) if 'optimizer' in checkpoint else () - scaler.load_state_dict(checkpoint['scaler']) if 'scaler' in checkpoint else () - best_acc1 = checkpoint['best_acc1'] - print("=> loaded resume checkpoint '{}' (epoch {}, best_metric = {})" - .format(args.resume, epoch, best_acc1)) - else: - print("=> no checkpoint found at '{}'".format(args.resume)) - else: - # auto-resume from latest checkpoint in output directory - latest = os.path.join(args.output_dir, 'checkpoint.pt') - if os.path.isfile(latest): - print("=> loading latest checkpoint '{}'".format(latest)) - latest_checkpoint = torch.load(latest, map_location='cpu') - args.start_epoch = latest_checkpoint['epoch'] - model.load_state_dict(latest_checkpoint['state_dict']) - optimizer.load_state_dict(latest_checkpoint['optimizer']) - scaler.load_state_dict(latest_checkpoint['scaler']) - best_acc1 = latest_checkpoint['best_acc1'] - print("=> loaded latest checkpoint '{}' (epoch {})" - .format(latest, latest_checkpoint['epoch'])) - - cudnn.benchmark = True - - # Data loading code - print("=> creating dataset") - if old_args.model.endswith('DISTILBERT_BASE'): - tokenizer = MyDistilBertTokenizer('distilbert-base-uncased') - elif old_args.model.endswith('BERT_BASE'): - tokenizer = MyBertTokenizer('bert-base-uncased') - elif old_args.model.endswith('BERT_LARGE'): - tokenizer = 
MyBertTokenizer('bert-large-uncased') - elif old_args.model.endswith('GPT2'): - tokenizer = MyGPT2Tokenizer('gpt2') - elif old_args.model.endswith('GPT2_MEDIUM'): - tokenizer = MyGPT2Tokenizer('gpt2-medium') - elif old_args.model.endswith('GPT2_LARGE'): - tokenizer = MyGPT2Tokenizer('gpt2-large') - elif old_args.model.endswith('GPT2_XL'): - tokenizer = MyGPT2Tokenizer('gpt2-xl') - else: - print("Using SimpleTokenizer because of model '{}'. " - "Please check if this is what you want".format(old_args.model)) - tokenizer = SimpleTokenizer() - - criterion = nn.CrossEntropyLoss(label_smoothing=args.label_smoothing).cuda(args.gpu) - - crop_size = 224 if '336PX' not in old_args.model else 336 - transforms_list = [ - Permute([3, 0, 1, 2]), # T H W C -> C T H W - transforms.RandomResizedCrop(crop_size, scale=(0.5, 1.0)), - transforms.RandomHorizontalFlip(p=0.5), - ] - if 'OPENAI' in old_args.model: - transforms_list.append(transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])) - else: - transforms_list.append(transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375])) - train_transform = transforms.Compose(transforms_list) - - val_transform = transforms.Compose([ - Permute([3, 0, 1, 2]), # T H W C -> C T H W - transforms.Resize(crop_size), - transforms.CenterCrop(crop_size), - (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if 'OPENAI' not in old_args.model else - transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])), - TemporalCrop(frames_per_clip=args.clip_length, stride=args.clip_length), - SpatialCrop(crop_size=crop_size, num_crops=args.num_crops), - ]) - - # build dataset - _, mapping_vn2act = generate_label_map(args.dataset) - if args.dataset == 'ek100_cls': - args.mapping_act2v = {i: int(vn.split(':')[0]) for (vn, i) in mapping_vn2act.items()} - args.mapping_act2n = {i: int(vn.split(':')[1]) for (vn, i) in mapping_vn2act.items()} - args.actions = pd.DataFrame.from_dict({'verb': args.mapping_act2v.values(), 'noun': args.mapping_act2n.values()}) - num_clips_at_val = args.num_clips - args.num_clips = 1 - train_dataset = datasets.get_downstream_dataset( - train_transform, tokenizer, args, subset='train', label_mapping=mapping_vn2act, - ) - args.num_clips = num_clips_at_val - val_dataset = datasets.get_downstream_dataset( - val_transform, tokenizer, args, subset='val', label_mapping=mapping_vn2act, - ) - - if args.distributed: - train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset) - val_sampler = torch.utils.data.SequentialSampler(val_dataset) # disable distributed - else: - train_sampler = None - val_sampler = None - - train_loader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), - num_workers=args.workers, pin_memory=True, sampler=train_sampler, drop_last=True - ) - print('len(train_loader) = {}'.format(len(train_loader))) - val_loader = torch.utils.data.DataLoader( - val_dataset, batch_size=args.batch_size, shuffle=(val_sampler is None), - num_workers=args.workers, pin_memory=True, sampler=val_sampler, drop_last=False - ) - print('len(val_loader) = {}'.format(len(val_loader))) - - if args.evaluate: - if args.use_vn_classifier: - val_stats = validate_multihead(val_loader, model, args) - else: - val_stats = validate(val_loader, model, args) - return - - if args.fix_lr: - lr_schedule = 
None - else: - lr_schedule = cosine_scheduler( - args.lr, args.lr_end, args.epochs, len(train_loader) // args.update_freq, - warmup_epochs=args.warmup_epochs, start_warmup_value=args.lr_start, - ) - - if dist_utils.is_main_process() and args.wandb: - wandb_id = os.path.split(args.output_dir)[-1] - wandb.init(project='LaViLa', id=wandb_id, config=args, resume='allow') - - print(args) - - best_metric = 0. - print("=> beginning training") - for epoch in range(args.start_epoch, args.epochs): - if args.distributed: - train_sampler.set_epoch(epoch) - - train_stats = train(train_loader, model, criterion, optimizer, scaler, epoch, lr_schedule, args) - - is_epoch = ((epoch + 1) % args.save_freq) == 0 - - print('=> saving checkpoint') - dist_utils.save_on_master({ - 'epoch': epoch + 1, - 'state_dict': model.state_dict(), - 'optimizer': optimizer.state_dict(), - 'scaler': scaler.state_dict(), - 'best_acc1': 0, - 'args': args, - }, False, args.output_dir, is_epoch=is_epoch) - - if ((epoch + 1) % args.eval_freq) == 0: - if args.use_vn_classifier: - val_stats = validate_multihead(val_loader, model, args) - else: - val_stats = validate(val_loader, model, args) - if val_stats['acc1'] > best_metric: - is_best = True - best_metric = val_stats['acc1'] - else: - is_best = False - - print('=> saving checkpoint') - dist_utils.save_on_master({ - 'epoch': epoch + 1, - 'state_dict': model.state_dict(), - 'optimizer': optimizer.state_dict(), - 'scaler': scaler.state_dict(), - 'best_acc1': best_metric, - 'args': args, - }, is_best, args.output_dir, is_epoch=is_epoch) - - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'test_{k}': v for k, v in val_stats.items()}, - 'epoch': epoch} - - if dist_utils.is_main_process(): - if args.wandb: - wandb.log(log_stats) - with open(os.path.join(args.output_dir, 'log.txt'), 'a') as f: - f.write(json.dumps(log_stats) + '\n') - - -def train(train_loader, model, criterion, optimizer, scaler, epoch, lr_schedule, args): - batch_time = AverageMeter('Time', ':6.2f') - data_time = AverageMeter('Data', ':6.2f') - mem = AverageMeter('Mem (GB)', ':6.1f') - iters_per_epoch = len(train_loader) // args.update_freq - losses = AverageMeter('Loss', ':.4e') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - top1_noun = AverageMeter('Noun Acc@1', ':6.2f') - top1_verb = AverageMeter('Verb Acc@1', ':6.2f') - progress = ProgressMeter( - iters_per_epoch, - [batch_time, data_time, mem, losses, top1, top5, top1_noun, top1_verb], - prefix="Epoch: [{}]".format(epoch)) - - # switch to train mode - model.train() - - end = time.time() - for data_iter, (images, target) in enumerate(train_loader): - optim_iter = data_iter // args.update_freq - - # measure data loading time - data_time.update(time.time() - end) - - # update weight decay and learning rate according to their schedule - it = iters_per_epoch * epoch + optim_iter # global training iteration - for k, param_group in enumerate(optimizer.param_groups): - if lr_schedule is not None: - param_group['lr'] = lr_schedule[it] * args.lr_multiplier_on_backbone - else: - param_group['lr'] = lr_schedule[it] - - images = images.cuda(args.gpu, non_blocking=True) - target = target.cuda(args.gpu, non_blocking=True) - - # compute output - with amp.autocast(enabled=not args.disable_amp): - output = model(images, use_checkpoint=args.use_checkpoint) - if isinstance(output, list): - assert len(output) == 3 - target_to_verb = torch.tensor([args.mapping_act2v[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) 
- loss = criterion(output[0], target_to_verb) - target_to_noun = torch.tensor([args.mapping_act2n[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - loss += criterion(output[1], target_to_noun) - loss += criterion(output[2], target) - else: - loss = criterion(output, target) - loss /= args.update_freq - - if not math.isfinite(loss.item()): - print("Loss is {}, stopping training".format(loss.item())) - sys.exit(1) - - scaler.scale(loss).backward() - - if (data_iter + 1) % args.update_freq != 0: - continue - - if args.clip_grad_value is not None: - scaler.unscale_(optimizer) - if args.clip_grad_type == 'norm': - torch.nn.utils.clip_grad_norm_( - model.parameters(), args.clip_grad_value, norm_type=2. - ) - elif args.clip_grad_type == 'value': - torch.nn.utils.clip_grad_value_(model.parameters(), args.clip_grad_value) - else: - assert False, f"Unknown clip mode ({args.clip_grad_type})." - # compute gradient and do SGD step - scaler.step(optimizer) - scaler.update() - model.zero_grad(set_to_none=True) - - if isinstance(output, list): - target_to_verb = torch.tensor([args.mapping_act2v[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - acc1_verb, _ = accuracy(output[0], target_to_verb, topk=(1, 5)) - top1_verb.update(acc1_verb.item(), images.size(0)) - target_to_noun = torch.tensor([args.mapping_act2n[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - acc1_noun, _ = accuracy(output[1], target_to_noun, topk=(1, 5)) - top1_noun.update(acc1_noun.item(), images.size(0)) - acc1, acc5 = accuracy(output[2], target, topk=(1, 5)) - losses.update(loss.item(), images.size(0)) - top1.update(acc1.item(), images.size(0)) - top5.update(acc5.item(), images.size(0)) - else: - output = torch.softmax(output, dim=1) - acc1, acc5 = accuracy(output, target, topk=(1, 5)) - losses.update(loss.item(), images.size(0)) - top1.update(acc1.item(), images.size(0)) - top5.update(acc5.item(), images.size(0)) - if args.dataset == 'ek100_cls': - vi = get_marginal_indexes(args.actions, 'verb') - ni = get_marginal_indexes(args.actions, 'noun') - verb_scores = torch.tensor(marginalize(output.detach().cpu().numpy(), vi)).cuda(args.gpu, non_blocking=True) - noun_scores = torch.tensor(marginalize(output.detach().cpu().numpy(), ni)).cuda(args.gpu, non_blocking=True) - target_to_verb = torch.tensor([args.mapping_act2v[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - target_to_noun = torch.tensor([args.mapping_act2n[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - acc1_verb, _ = accuracy(verb_scores, target_to_verb, topk=(1, 5)) - acc1_noun, _ = accuracy(noun_scores, target_to_noun, topk=(1, 5)) - top1_verb.update(acc1_verb.item(), images.size(0)) - top1_noun.update(acc1_noun.item(), images.size(0)) - else: - top1_verb.update(0., images.size(0)) - top1_noun.update(0., images.size(0)) - - # measure elapsed time - batch_time.update(time.time() - end) - end = time.time() - - mem.update(torch.cuda.max_memory_allocated() // 1e9) - - if optim_iter % args.print_freq == 0: - if dist_utils.is_main_process() and args.wandb: - wandb.log({ - 'acc1': top1.avg, 'acc5': top5.avg, 'loss': losses.avg, - 'acc1_verb': top1_verb.avg, 'acc1_noun': top1_noun.avg, - }) - progress.display(optim_iter) - progress.synchronize() - return { - 'acc1': top1.avg, 'acc5': top5.avg, 'loss': losses.avg, - 'acc1_verb': top1_verb.avg, 'acc1_noun': top1_noun.avg, - 'lr': optimizer.param_groups[0]['lr'], - } - - -def validate(val_loader, model, args): - batch_time = AverageMeter('Time', ':6.2f') - 
data_time = AverageMeter('Data', ':6.2f') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - progress = ProgressMeter( - len(val_loader), - [batch_time, top1, top5], - prefix='Test: ' - ) - - # switch to eval mode - model.eval() - if args.use_half: - model.half() - - all_outputs = [] - all_targets = [] - with torch.no_grad(): - end = time.time() - for i, (images, target) in enumerate(val_loader): - # measure data loading time - data_time.update(time.time() - end) - if isinstance(images, list): - logit_allcrops = [] - for crop in images: - crop = crop.cuda(args.gpu, non_blocking=True) - if args.use_half: - crop = crop.half() - logit = model(crop, use_checkpoint=args.use_checkpoint) - logit_allcrops.append(logit) - logit_allcrops = torch.stack(logit_allcrops, 0) - logit = logit_allcrops.mean(0) - logit = torch.softmax(logit, dim=1) - target = target.cuda(args.gpu, non_blocking=True) - - acc1, acc5 = accuracy(logit, target, topk=(1, 5)) - top1.update(acc1.item(), target.size(0)) - top5.update(acc5.item(), target.size(0)) - else: - images = images.cuda(args.gpu, non_blocking=True) - target = target.cuda(args.gpu, non_blocking=True) - if args.use_half: - images = images.half() - - logit = model(images, use_checkpoint=args.use_checkpoint) - logit = torch.softmax(logit, dim=1) - - acc1, acc5 = accuracy(logit, target, topk=(1, 5)) - top1.update(acc1.item(), images.size(0)) - top5.update(acc5.item(), images.size(0)) - - all_outputs.append(logit) - all_targets.append(target) - # measure elapsed time - batch_time.update(time.time() - end) - end = time.time() - - if i % args.print_freq == 0: - progress.display(i) - progress.synchronize() - if args.dataset == 'ek100_cls': - print('EK100 * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'.format(top1=top1, top5=top5)) - else: - print('EGTEA * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'.format(top1=top1, top5=top5)) - all_outputs = torch.cat(all_outputs).cpu().numpy() - all_targets = torch.cat(all_targets).cpu().numpy() - cm = confusion_matrix(all_targets, all_outputs.argmax(axis=1)) - mean_acc, acc = get_mean_accuracy(cm) - print('Mean Acc. = {:.3f}, Top-1 Acc. 
= {:.3f}'.format(mean_acc, acc)) - - if args.dataset == 'ek100_cls': - vi = get_marginal_indexes(args.actions, 'verb') - ni = get_marginal_indexes(args.actions, 'noun') - verb_scores = marginalize(all_outputs, vi) - noun_scores = marginalize(all_outputs, ni) - target_to_verb = np.array([args.mapping_act2v[a] for a in all_targets.tolist()]) - target_to_noun = np.array([args.mapping_act2n[a] for a in all_targets.tolist()]) - cm = confusion_matrix(target_to_verb, verb_scores.argmax(axis=1)) - _, acc = get_mean_accuracy(cm) - print('Verb Acc@1: {:.3f}'.format(acc)) - cm = confusion_matrix(target_to_noun, noun_scores.argmax(axis=1)) - _, acc = get_mean_accuracy(cm) - print('Noun Acc@1: {:.3f}'.format(acc)) - return {'acc1': top1.avg, 'acc5': top5.avg, 'mean_acc': mean_acc} - - -def validate_multihead(val_loader, model, args): - batch_time = AverageMeter('Time', ':6.2f') - data_time = AverageMeter('Data', ':6.2f') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - top1_verb = AverageMeter('Verb Acc@1', ':6.2f') - top1_noun = AverageMeter('Noun Acc@1', ':6.2f') - progress = ProgressMeter( - len(val_loader), - [batch_time, top1, top5, top1_verb, top1_noun], - prefix='Test: ' - ) - - # switch to eval mode - model.eval() - if args.use_half: - model.half() - - all_verb_outputs = [] - all_noun_outputs = [] - all_action_outputs = [] - all_verb_targets = [] - all_noun_targets = [] - all_action_targets = [] - with torch.no_grad(): - end = time.time() - for i, (images, target) in enumerate(val_loader): - # measure data loading time - data_time.update(time.time() - end) - if isinstance(images, torch.Tensor): - images = [images, ] - logit_verb_allcrops = [] - logit_noun_allcrops = [] - logit_action_allcrops = [] - for crop in images: - crop = crop.cuda(args.gpu, non_blocking=True) - if args.use_half: - crop = crop.half() - logit = model(crop, use_checkpoint=args.use_checkpoint) - logit_verb_allcrops.append(logit[0]) - logit_noun_allcrops.append(logit[1]) - logit_action_allcrops.append(logit[2]) - logit_verb_allcrops = torch.stack(logit_verb_allcrops, 0) - logit_noun_allcrops = torch.stack(logit_noun_allcrops, 0) - logit_action_allcrops = torch.stack(logit_action_allcrops, 0) - logit_verb = logit_verb_allcrops.mean(0) - logit_noun = logit_noun_allcrops.mean(0) - logit_action = logit_action_allcrops.mean(0) - logit_noun = torch.softmax(logit_noun, dim=1) - logit_verb = torch.softmax(logit_verb, dim=1) - logit_action = torch.softmax(logit_action, dim=1) - target = target.cuda(args.gpu, non_blocking=True) - target_to_verb = torch.tensor([args.mapping_act2v[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - target_to_noun = torch.tensor([args.mapping_act2n[a] for a in target.tolist()]).cuda(args.gpu, non_blocking=True) - - acc1, acc5 = accuracy(logit_action, target, topk=(1, 5)) - acc1_verb, _ = accuracy(logit_verb, target_to_verb, topk=(1, 5)) - acc1_noun, _ = accuracy(logit_noun, target_to_noun, topk=(1, 5)) - top1.update(acc1.item(), target.size(0)) - top5.update(acc5.item(), target.size(0)) - top1_verb.update(acc1_verb.item(), target_to_verb.size(0)) - top1_noun.update(acc1_noun.item(), target_to_noun.size(0)) - - all_verb_outputs.append(logit_verb) - all_noun_outputs.append(logit_noun) - all_action_outputs.append(logit_action) - all_verb_targets.append(target_to_verb) - all_noun_targets.append(target_to_noun) - all_action_targets.append(target) - # measure elapsed time - batch_time.update(time.time() - end) - end = time.time() - - if i % args.print_freq == 0: - 
progress.display(i) - progress.synchronize() - print('EK100 * Verb Acc@1 {top1.avg:.3f}'.format(top1=top1_verb)) - print('EK100 * Noun Acc@1 {top1.avg:.3f}'.format(top1=top1_noun)) - print('EK100 * Action Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'.format(top1=top1, top5=top5)) - return {'acc1': top1.avg, 'acc5': top5.avg, 'acc1_verb': top1_verb.avg, 'acc1_noun': top1_noun.avg} - - -if __name__ == '__main__': - parser = argparse.ArgumentParser('lavila finetune and evaluation', parents=[get_args_parser()]) - args = parser.parse_args() - os.makedirs(args.output_dir, exist_ok=True) - main(args) diff --git a/spaces/ncats/EpiPipeline4RD/README.md b/spaces/ncats/EpiPipeline4RD/README.md deleted file mode 100644 index 04cf13c9c3b5c0da69bb9ce76efa8cd419a37ac4..0000000000000000000000000000000000000000 --- a/spaces/ncats/EpiPipeline4RD/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: EpiPipeline4RD -emoji: ⚕ -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -models: ncats/EpiExtract4GARD-v2 -pinned: true ---- \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My-Horse-And-Me-MULTI8-NoDVD-Without-Human-Verification-FREE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My-Horse-And-Me-MULTI8-NoDVD-Without-Human-Verification-FREE.md deleted file mode 100644 index 6d145530b5c476ed289f07a99fd7db799a681b5c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My-Horse-And-Me-MULTI8-NoDVD-Without-Human-Verification-FREE.md +++ /dev/null @@ -1,51 +0,0 @@ -## My Horse And Me [MULTI8] No-DVD Without Human Verification - - - -**Download > [https://kneedacexbrew.blogspot.com/?d=2tw0FZ](https://kneedacexbrew.blogspot.com/?d=2tw0FZ)** - - - -# How to Play My Horse and Me Without a DVD - - - -My Horse and Me is a simulation game that lets you experience the bond between you and your horse. You can customize your character and your horse, compete in various events, and explore different locations. However, if you want to play the game without inserting the DVD every time, you will need to use a No-DVD/Fixed Image file. - - - -A No-DVD/Fixed Image file is a modified version of the game's executable file that bypasses the DVD check. This way, you can play the game without having to insert the original DVD in your drive. However, some No-DVD/Fixed Image files may not work with certain versions of the game or may be detected as modified by online servers. Therefore, you should always backup your original files before using a No-DVD/Fixed Image file. - - - -There are several sources where you can download No-DVD/Fixed Image files for My Horse and Me. One of them is GameCopyWorld, a website that provides game fixes, trainers, cheats, and patches for various PC games. You can find three different No-DVD/Fixed Image files for My Horse and Me on GameCopyWorld, each with different requirements and instructions. You can choose the one that suits your game version and system best. - - - -To use a No-DVD/Fixed Image file from GameCopyWorld, you will need to install the game first. Then, you will need to download and mount the No-DVD/Fixed Image file using a virtual drive program such as DAEMON Tools. A virtual drive program allows you to create a virtual CD/DVD drive on your computer and mount an image file as if it were a physical disc. This way, the game will think that you have inserted the original DVD in your drive. 
- - - -However, before you mount the No-DVD/Fixed Image file, you may need to disable or uninstall any CD/DVD emulation software or tools that may interfere with the game's protection system. Some examples of such software or tools are Alcohol 120%, CloneCD, Nero Burning ROM, BlindWrite, etc. You may also need to update your DAEMON Tools version or use a specific driver depending on the No-DVD/Fixed Image file you choose. - - - -Once you have mounted the No-DVD/Fixed Image file, you can launch the game and enjoy it without having to insert the DVD every time. However, keep in mind that some No-DVD/Fixed Image files may not work with online features or multiplayer modes of the game. If you want to play online or update the game to a newer version, you may need to restore your original files or use a different No-DVD/Fixed Image file. - - - -Playing My Horse and Me without a DVD can be convenient and easy if you follow these steps. However, you should always respect the game's copyright and use a No-DVD/Fixed Image file only if you own a legitimate copy of the game. - - - -My Horse and Me is not just a game about riding a horse, but also a game about developing a bond with your horse. You can choose from different breeds and colors of horses, and customize your own character and outfits. You can also interact with your horse in various ways, such as feeding, grooming, petting, and playing. The more you care for your horse, the more trust and affection you will build. - - - -The game also features different modes and challenges that test your riding skills and knowledge. You can compete in the Championship Mode, where you can enter various events such as dressage, cross country, and show jumping. You can also play the Riding Games, where you can have fun with your horse in different mini-games such as collecting butterflies, herding chickens, or catching stars. You can also explore different locations such as the farm, the forest, or the beach. - - - -My Horse and Me is a game that appeals to both casual and hardcore horse lovers. It offers a realistic and immersive experience of owning and riding a horse, as well as a fun and engaging gameplay. Whether you want to learn more about horses, practice your riding skills, or just have fun with your horse, My Horse and Me is the game for you. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Roboform.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Roboform.md deleted file mode 100644 index 83acf4f59decefaf285185f373c56717c935a930..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Roboform.md +++ /dev/null @@ -1,60 +0,0 @@ - -

    PATCHED RoboForm: How to Update Your Password Manager

    -

    If you use RoboForm to store and manage your passwords, you may want to update it regularly to get the latest features and security improvements. Updating RoboForm is easy and can be done in a few steps, depending on your device and browser. In this article, we will show you how to update your PATCHED RoboForm on Windows, Mac, iOS, Android, Linux, and Chromebook.

    -

    PATCHED Roboform


    Download ✯✯✯ https://urlcod.com/2uIc6k



    -

    Why Update RoboForm?

    -

    RoboForm is a password manager that securely stores all of your passwords and logs you in with a single click or tap. It also saves time by filling out personal and billing information on long web forms. RoboForm is available for Windows, Mac, iOS, Android, Linux, and Chromebook, with support for all their respective major browsers, including Microsoft Edge[^2^] [^3^].

    -

    Updating RoboForm ensures that you have the most recent version of the software, which may include new features, bug fixes, performance improvements, and security enhancements. For example, the latest version of RoboForm for Windows (v9.4.6) adds offline access, keeps you signed in on all installed browsers, supports Windows Hello, and allows you to log in to Windows applications[^2^].

    -

    How to Update RoboForm on Windows

    -

    To update RoboForm on Windows, follow these steps[^1^]:

    -
      -
    1. Click the [ ^ ] icon in the lower right corner of your screen.
    2. Click the RoboForm icon.
    3. Select “Help” from the menu.
    4. Click the “Check for Update” option.
    5. If an update is available, you will receive an update message prompting you to install. Click "Yes" to install the update, or "No" to postpone it.
    -

    Note: If you previously selected "Notify about new versions", update notifications will automatically pop up when they become available. If you would like to receive automatic notifications but do not currently, follow these steps:

    -
      -
    1. Click the RoboForm icon in the toolbar.
    2. Select “Help” from the menu.
    3. Click the “About” option.
    4. Check the box labeled "Notify about new versions" to enable update notifications.
    - -

    How to Update RoboForm on Mac

    -

    To update RoboForm on Mac, follow these steps[^1^]:

    -
      -
    1. Click the RoboForm icon in your Menu Bar, in the upper right corner of your screen.
    2. Click the 3 dots in the upper right corner.
    3. Select "Open Desktop Editor" from the menu.
    4. After opening the Editor, you should now see a "RoboForm" menu in the upper left corner of the Menu Bar. Click this menu.
    5. Select “Check for Update" from the menu.
    - -

    Note: You may alternatively update your RoboForm desktop application through the App Store by following these steps:

    - -
      -
    1. Open the App Store.
    2. Select the “Updates” tab from the column on the left.
    3. If RoboForm has an update available, it will be listed there.
    - -

    How to Update RoboForm on iOS and Android

    - -

    To update RoboForm on iOS and Android devices, you need to check for updates through the App Store or Google Play Store, respectively. Follow these steps:

    - -
      -
    1. Open the App Store or Google Play Store on your device.
    2. Search for RoboForm or go to your list of installed apps.
    3. If there is an update available for RoboForm, tap on it and then tap on "Update".
    - -

    How to Update RoboForm on Linux and Chromebook

    - -

    To update RoboForm on Linux and Chromebook devices, download and install the latest version of RoboForm from the RoboForm website.

    -

    - -

      -
      -
      \ No newline at end of file diff --git a/spaces/neurotech/cat_dog_audio_classifier/README.md b/spaces/neurotech/cat_dog_audio_classifier/README.md deleted file mode 100644 index 4414925d6d1d771d0de9158e56adbca3faadb67d..0000000000000000000000000000000000000000 --- a/spaces/neurotech/cat_dog_audio_classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cat_dog_audio_classifier -emoji: 🌖 -colorFrom: purple -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/nightfury/Stable_Diffusion_2/index.html b/spaces/nightfury/Stable_Diffusion_2/index.html deleted file mode 100644 index 20fe4ce6c90589d4c95f6900feb1fc0b9f9d4da0..0000000000000000000000000000000000000000 --- a/spaces/nightfury/Stable_Diffusion_2/index.html +++ /dev/null @@ -1,13 +0,0 @@ - - - - - - Stable Diffusion 2 - - - - - - - \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/transforms/augmentation_impl.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/transforms/augmentation_impl.py deleted file mode 100644 index 7cc7b28be66cdf14bff493745c6c567da55aeb34..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/transforms/augmentation_impl.py +++ /dev/null @@ -1,736 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Implement many useful :class:`Augmentation`. -""" -import numpy as np -import sys -from numpy import random -from typing import Tuple -import torch -from fvcore.transforms.transform import ( - BlendTransform, - CropTransform, - HFlipTransform, - NoOpTransform, - PadTransform, - Transform, - TransformList, - VFlipTransform, -) -from PIL import Image - -from detectron2.structures import Boxes, pairwise_iou - -from .augmentation import Augmentation, _transform_to_aug -from .transform import ExtentTransform, ResizeTransform, RotationTransform - -__all__ = [ - "FixedSizeCrop", - "RandomApply", - "RandomBrightness", - "RandomContrast", - "RandomCrop", - "RandomExtent", - "RandomFlip", - "RandomSaturation", - "RandomLighting", - "RandomRotation", - "Resize", - "ResizeScale", - "ResizeShortestEdge", - "RandomCrop_CategoryAreaConstraint", - "RandomResize", - "MinIoURandomCrop", -] - - -class RandomApply(Augmentation): - """ - Randomly apply an augmentation with a given probability. - """ - - def __init__(self, tfm_or_aug, prob=0.5): - """ - Args: - tfm_or_aug (Transform, Augmentation): the transform or augmentation - to be applied. It can either be a `Transform` or `Augmentation` - instance. - prob (float): probability between 0.0 and 1.0 that - the wrapper transformation is applied - """ - super().__init__() - self.aug = _transform_to_aug(tfm_or_aug) - assert 0.0 <= prob <= 1.0, f"Probablity must be between 0.0 and 1.0 (given: {prob})" - self.prob = prob - - def get_transform(self, *args): - do = self._rand_range() < self.prob - if do: - return self.aug.get_transform(*args) - else: - return NoOpTransform() - - def __call__(self, aug_input): - do = self._rand_range() < self.prob - if do: - return self.aug(aug_input) - else: - return NoOpTransform() - - -class RandomFlip(Augmentation): - """ - Flip the image horizontally or vertically with the given probability. - """ - - def __init__(self, prob=0.5, *, horizontal=True, vertical=False): - """ - Args: - prob (float): probability of flip. 
- horizontal (boolean): whether to apply horizontal flipping - vertical (boolean): whether to apply vertical flipping - """ - super().__init__() - - if horizontal and vertical: - raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.") - if not horizontal and not vertical: - raise ValueError("At least one of horiz or vert has to be True!") - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - do = self._rand_range() < self.prob - if do: - if self.horizontal: - return HFlipTransform(w) - elif self.vertical: - return VFlipTransform(h) - else: - return NoOpTransform() - - -class Resize(Augmentation): - """Resize image to a fixed target size""" - - def __init__(self, shape, interp=Image.BILINEAR): - """ - Args: - shape: (h, w) tuple or a int - interp: PIL interpolation method - """ - if isinstance(shape, int): - shape = (shape, shape) - shape = tuple(shape) - self._init(locals()) - - def get_transform(self, image): - return ResizeTransform( - image.shape[0], image.shape[1], self.shape[0], self.shape[1], self.interp - ) - - -class ResizeShortestEdge(Augmentation): - """ - Resize the image while keeping the aspect ratio unchanged. - It attempts to scale the shorter edge to the given `short_edge_length`, - as long as the longer edge does not exceed `max_size`. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. - """ - - @torch.jit.unused - def __init__( - self, short_edge_length, max_size=sys.maxsize, sample_style="range", interp=Image.BILINEAR - ): - """ - Args: - short_edge_length (list[int]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the shortest edge length. - If ``sample_style=="choice"``, a list of shortest edge lengths to sample from. - max_size (int): maximum allowed longest edge length. - sample_style (str): either "range" or "choice". - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - - self.is_range = sample_style == "range" - if isinstance(short_edge_length, int): - short_edge_length = (short_edge_length, short_edge_length) - if self.is_range: - assert len(short_edge_length) == 2, ( - "short_edge_length must be two values using 'range' sample style." - f" Got {short_edge_length}!" - ) - self._init(locals()) - - @torch.jit.unused - def get_transform(self, image): - h, w = image.shape[:2] - if self.is_range: - size = np.random.randint(self.short_edge_length[0], self.short_edge_length[1] + 1) - else: - size = np.random.choice(self.short_edge_length) - if size == 0: - return NoOpTransform() - - newh, neww = ResizeShortestEdge.get_output_shape(h, w, size, self.max_size) - return ResizeTransform(h, w, newh, neww, self.interp) - - @staticmethod - def get_output_shape( - oldh: int, oldw: int, short_edge_length: int, max_size: int - ) -> Tuple[int, int]: - """ - Compute the output size given input size and target short edge length. - """ - h, w = oldh, oldw - size = short_edge_length * 1.0 - scale = size / min(h, w) - if h < w: - newh, neww = size, scale * w - else: - newh, neww = scale * h, size - if max(newh, neww) > max_size: - scale = max_size * 1.0 / max(newh, neww) - newh = newh * scale - neww = neww * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) - - -class ResizeScale(Augmentation): - """ - Takes target size as input and randomly scales the given target size between `min_scale` - and `max_scale`. 
It then scales the input image such that it fits inside the scaled target - box, keeping the aspect ratio constant. - This implements the resize part of the Google's 'resize_and_crop' data augmentation: - https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/input_utils.py#L127 - """ - - def __init__( - self, - min_scale: float, - max_scale: float, - target_height: int, - target_width: int, - interp: int = Image.BILINEAR, - ): - """ - Args: - min_scale: minimum image scale range. - max_scale: maximum image scale range. - target_height: target image height. - target_width: target image width. - interp: image interpolation method. - """ - super().__init__() - self._init(locals()) - - def _get_resize(self, image: np.ndarray, scale: float) -> Transform: - input_size = image.shape[:2] - - # Compute new target size given a scale. - target_size = (self.target_height, self.target_width) - target_scale_size = np.multiply(target_size, scale) - - # Compute actual rescaling applied to input image and output size. - output_scale = np.minimum( - target_scale_size[0] / input_size[0], target_scale_size[1] / input_size[1] - ) - output_size = np.round(np.multiply(input_size, output_scale)).astype(int) - - return ResizeTransform( - input_size[0], input_size[1], int(output_size[0]), int(output_size[1]), self.interp - ) - - def get_transform(self, image: np.ndarray) -> Transform: - random_scale = np.random.uniform(self.min_scale, self.max_scale) - return self._get_resize(image, random_scale) - - -class RandomRotation(Augmentation): - """ - This method returns a copy of this image, rotated the given - number of degrees counter clockwise around the given center. - """ - - def __init__(self, angle, expand=True, center=None, sample_style="range", interp=None): - """ - Args: - angle (list[float]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the angle (in degrees). - If ``sample_style=="choice"``, a list of angles to sample from - expand (bool): choose if the image should be resized to fit the whole - rotated image (default), or simply cropped - center (list[[float, float]]): If ``sample_style=="range"``, - a [[minx, miny], [maxx, maxy]] relative interval from which to sample the center, - [0, 0] being the top left of the image and [1, 1] the bottom right. 
- If ``sample_style=="choice"``, a list of centers to sample from - Default: None, which means that the center of rotation is the center of the image - center has no effect if expand=True because it only affects shifting - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - self.is_range = sample_style == "range" - if isinstance(angle, (float, int)): - angle = (angle, angle) - if center is not None and isinstance(center[0], (float, int)): - center = (center, center) - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - center = None - if self.is_range: - angle = np.random.uniform(self.angle[0], self.angle[1]) - if self.center is not None: - center = ( - np.random.uniform(self.center[0][0], self.center[1][0]), - np.random.uniform(self.center[0][1], self.center[1][1]), - ) - else: - angle = np.random.choice(self.angle) - if self.center is not None: - center = np.random.choice(self.center) - - if center is not None: - center = (w * center[0], h * center[1]) # Convert to absolute coordinates - - if angle % 360 == 0: - return NoOpTransform() - - return RotationTransform(h, w, angle, expand=self.expand, center=center, interp=self.interp) - - -class FixedSizeCrop(Augmentation): - """ - If `crop_size` is smaller than the input image size, then it uses a random crop of - the crop size. If `crop_size` is larger than the input image size, then it pads - the right and the bottom of the image to the crop size if `pad` is True, otherwise - it returns the smaller image. - """ - - def __init__( - self, - crop_size: Tuple[int], - pad: bool = True, - pad_value: float = 128.0, - seg_pad_value: int = 255, - ): - """ - Args: - crop_size: target image (height, width). - pad: if True, will pad images smaller than `crop_size` up to `crop_size` - pad_value: the padding value to the image. - seg_pad_value: the padding value to the segmentation mask. - """ - super().__init__() - self._init(locals()) - - def _get_crop(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add random crop if the image is scaled up. - max_offset = np.subtract(input_size, output_size) - max_offset = np.maximum(max_offset, 0) - offset = np.multiply(max_offset, np.random.uniform(0.0, 1.0)) - offset = np.round(offset).astype(int) - return CropTransform( - offset[1], offset[0], output_size[1], output_size[0], input_size[1], input_size[0] - ) - - def _get_pad(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add padding if the image is scaled down. - pad_size = np.subtract(output_size, input_size) - pad_size = np.maximum(pad_size, 0) - original_size = np.minimum(input_size, output_size) - return PadTransform( - 0, - 0, - pad_size[1], - pad_size[0], - original_size[1], - original_size[0], - self.pad_value, - self.seg_pad_value, - ) - - def get_transform(self, image: np.ndarray) -> TransformList: - transforms = [self._get_crop(image)] - if self.pad: - transforms.append(self._get_pad(image)) - return TransformList(transforms) - - -class RandomCrop(Augmentation): - """ - Randomly crop a rectangle region out of an image. - """ - - def __init__(self, crop_type: str, crop_size): - """ - Args: - crop_type (str): one of "relative_range", "relative", "absolute", "absolute_range". - crop_size (tuple[float, float]): two floats, explained below. 
- - - "relative": crop a (H * crop_size[0], W * crop_size[1]) region from an input image of - size (H, W). crop size should be in (0, 1] - - "relative_range": uniformly sample two values from [crop_size[0], 1] - and [crop_size[1]], 1], and use them as in "relative" crop type. - - "absolute" crop a (crop_size[0], crop_size[1]) region from input image. - crop_size must be smaller than the input image size. - - "absolute_range", for an input of size (H, W), uniformly sample H_crop in - [crop_size[0], min(H, crop_size[1])] and W_crop in [crop_size[0], min(W, crop_size[1])]. - Then crop a region (H_crop, W_crop). - """ - # TODO style of relative_range and absolute_range are not consistent: - # one takes (h, w) but another takes (min, max) - super().__init__() - assert crop_type in ["relative_range", "relative", "absolute", "absolute_range"] - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - croph, cropw = self.get_crop_size((h, w)) - assert h >= croph and w >= cropw, "Shape computation in {} has bugs.".format(self) - h0 = np.random.randint(h - croph + 1) - w0 = np.random.randint(w - cropw + 1) - return CropTransform(w0, h0, cropw, croph) - - def get_crop_size(self, image_size): - """ - Args: - image_size (tuple): height, width - - Returns: - crop_size (tuple): height, width in absolute pixels - """ - h, w = image_size - if self.crop_type == "relative": - ch, cw = self.crop_size - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "relative_range": - crop_size = np.asarray(self.crop_size, dtype=np.float32) - ch, cw = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "absolute": - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == "absolute_range": - assert self.crop_size[0] <= self.crop_size[1] - ch = np.random.randint(min(h, self.crop_size[0]), min(h, self.crop_size[1]) + 1) - cw = np.random.randint(min(w, self.crop_size[0]), min(w, self.crop_size[1]) + 1) - return ch, cw - else: - raise NotImplementedError("Unknown crop type {}".format(self.crop_type)) - - -class RandomCrop_CategoryAreaConstraint(Augmentation): - """ - Similar to :class:`RandomCrop`, but find a cropping window such that no single category - occupies a ratio of more than `single_category_max_area` in semantic segmentation ground - truth, which can cause unstability in training. The function attempts to find such a valid - cropping window for at most 10 times. - """ - - def __init__( - self, - crop_type: str, - crop_size, - single_category_max_area: float = 1.0, - ignored_category: int = None, - ): - """ - Args: - crop_type, crop_size: same as in :class:`RandomCrop` - single_category_max_area: the maximum allowed area ratio of a - category. Set to 1.0 to disable - ignored_category: allow this category in the semantic segmentation - ground truth to exceed the area ratio. Usually set to the category - that's ignored in training. 
- """ - self.crop_aug = RandomCrop(crop_type, crop_size) - self._init(locals()) - - def get_transform(self, image, sem_seg): - if self.single_category_max_area >= 1.0: - return self.crop_aug.get_transform(image) - else: - h, w = sem_seg.shape - for _ in range(10): - crop_size = self.crop_aug.get_crop_size((h, w)) - y0 = np.random.randint(h - crop_size[0] + 1) - x0 = np.random.randint(w - crop_size[1] + 1) - sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]] - labels, cnt = np.unique(sem_seg_temp, return_counts=True) - if self.ignored_category is not None: - cnt = cnt[labels != self.ignored_category] - if len(cnt) > 1 and np.max(cnt) < np.sum(cnt) * self.single_category_max_area: - break - crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0]) - return crop_tfm - - -class RandomExtent(Augmentation): - """ - Outputs an image by cropping a random "subrect" of the source image. - - The subrect can be parameterized to include pixels outside the source image, - in which case they will be set to zeros (i.e. black). The size of the output - image will vary with the size of the random subrect. - """ - - def __init__(self, scale_range, shift_range): - """ - Args: - output_size (h, w): Dimensions of output image - scale_range (l, h): Range of input-to-output size scaling factor - shift_range (x, y): Range of shifts of the cropped subrect. The rect - is shifted by [w / 2 * Uniform(-x, x), h / 2 * Uniform(-y, y)], - where (w, h) is the (width, height) of the input image. Set each - component to zero to crop at the image's center. - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - img_h, img_w = image.shape[:2] - - # Initialize src_rect to fit the input image. - src_rect = np.array([-0.5 * img_w, -0.5 * img_h, 0.5 * img_w, 0.5 * img_h]) - - # Apply a random scaling to the src_rect. - src_rect *= np.random.uniform(self.scale_range[0], self.scale_range[1]) - - # Apply a random shift to the coordinates origin. - src_rect[0::2] += self.shift_range[0] * img_w * (np.random.rand() - 0.5) - src_rect[1::2] += self.shift_range[1] * img_h * (np.random.rand() - 0.5) - - # Map src_rect coordinates into image coordinates (center at corner). - src_rect[0::2] += 0.5 * img_w - src_rect[1::2] += 0.5 * img_h - - return ExtentTransform( - src_rect=(src_rect[0], src_rect[1], src_rect[2], src_rect[3]), - output_size=(int(src_rect[3] - src_rect[1]), int(src_rect[2] - src_rect[0])), - ) - - -class RandomContrast(Augmentation): - """ - Randomly transforms image contrast. - - Contrast intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce contrast - - intensity = 1 will preserve the input image - - intensity > 1 will increase contrast - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=image.mean(), src_weight=1 - w, dst_weight=w) - - -class RandomBrightness(Augmentation): - """ - Randomly transforms image brightness. - - Brightness intensity is uniformly sampled in (intensity_min, intensity_max). 
- - intensity < 1 will reduce brightness - - intensity = 1 will preserve the input image - - intensity > 1 will increase brightness - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=0, src_weight=1 - w, dst_weight=w) - - -class RandomSaturation(Augmentation): - """ - Randomly transforms saturation of an RGB image. - Input images are assumed to have 'RGB' channel order. - - Saturation intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce saturation (make the image more grayscale) - - intensity = 1 will preserve the input image - - intensity > 1 will increase saturation - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation (1 preserves input). - intensity_max (float): Maximum augmentation (1 preserves input). - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomSaturation only works on RGB images" - w = np.random.uniform(self.intensity_min, self.intensity_max) - grayscale = image.dot([0.299, 0.587, 0.114])[:, :, np.newaxis] - return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w) - - -class RandomLighting(Augmentation): - """ - The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet. - Input images are assumed to have 'RGB' channel order. - - The degree of color jittering is randomly sampled via a normal distribution, - with standard deviation given by the scale parameter. - """ - - def __init__(self, scale): - """ - Args: - scale (float): Standard deviation of principal component weighting. - """ - super().__init__() - self._init(locals()) - self.eigen_vecs = np.array( - [[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203]] - ) - self.eigen_vals = np.array([0.2175, 0.0188, 0.0045]) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomLighting only works on RGB images" - weights = np.random.normal(scale=self.scale, size=3) - return BlendTransform( - src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0 - ) - - -class RandomResize(Augmentation): - """Randomly resize image to a target size in shape_list""" - - def __init__(self, shape_list, interp=Image.BILINEAR): - """ - Args: - shape_list: a list of shapes in (h, w) - interp: PIL interpolation method - """ - self.shape_list = shape_list - self._init(locals()) - - def get_transform(self, image): - shape_idx = np.random.randint(low=0, high=len(self.shape_list)) - h, w = self.shape_list[shape_idx] - return ResizeTransform(image.shape[0], image.shape[1], h, w, self.interp) - - -class MinIoURandomCrop(Augmentation): - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. 
h,w := a*h, a*w, - where a >= min_crop_size) - mode_trials: number of trials for sampling min_ious threshold - crop_trials: number of trials for sampling crop_size after cropping - """ - - def __init__( - self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - mode_trials=1000, - crop_trials=50, - ): - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.mode_trials = mode_trials - self.crop_trials = crop_trials - - def get_transform(self, image, boxes): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - boxes: ground truth boxes in (x1, y1, x2, y2) format - """ - if boxes is None: - return NoOpTransform() - h, w, c = image.shape - for _ in range(self.mode_trials): - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return NoOpTransform() - - min_iou = mode - for _ in range(self.crop_trials): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array((int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = pairwise_iou( - Boxes(patch.reshape(-1, 4)), Boxes(boxes.reshape(-1, 4)) - ).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ( - (center[:, 0] > patch[0]) - * (center[:, 1] > patch[1]) - * (center[:, 0] < patch[2]) - * (center[:, 1] < patch[3]) - ) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - return CropTransform(int(left), int(top), int(new_w), int(new_h)) diff --git a/spaces/nikitalokhmachev-ai/line-art-colorization/data_utils.py b/spaces/nikitalokhmachev-ai/line-art-colorization/data_utils.py deleted file mode 100644 index 7ee421b0cd1f49667ccf3fa7cf9593c30d6cf1db..0000000000000000000000000000000000000000 --- a/spaces/nikitalokhmachev-ai/line-art-colorization/data_utils.py +++ /dev/null @@ -1,280 +0,0 @@ -# + -import os -import math -import random -import numbers -import requests -import shutil -import numpy as np -import scipy.stats as stats -from PIL import Image -from tqdm.auto import tqdm - -from xdog import to_sketch -# - - -import torch -import torch.nn as nn -import torch.utils.data as data -from torch.utils.data.sampler import Sampler - -from torchvision import transforms -from torchvision.transforms import Resize, CenterCrop - -mu, sigma = 1, 0.005 -X = stats.truncnorm((0 - mu) / sigma, (1 - mu) / sigma, loc=mu, scale=sigma) - -denormalize = transforms.Compose([ transforms.Normalize(mean = [ 0., 0., 0. ], - std = [ 1/0.5, 1/0.5, 1/0.5 ]), - transforms.Normalize(mean = [ -0.5, -0.5, -0.5 ], - std = [ 1., 1., 1. 
]),]) - -etrans = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize((0.5), (0.5)) -]) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -def predict_img(gen, sk, hnt = None): - #sk = Image.open(sketch_path).convert('L') - sk = etrans(sk) - - pad_w = 16 - sk.shape[1] % 16 if sk.shape[1] % 16 != 0 else 0 - pad_h = 16 - sk.shape[2] % 16 if sk.shape[2] % 16 != 0 else 0 - pad = nn.ZeroPad2d((pad_h, 0, pad_w, 0)) - sk = pad(sk) - - sk = sk.unsqueeze(0) - sk = sk.to(device) - - if hnt == None: - hnt = torch.zeros((1, 4, sk.shape[2]//4, sk.shape[3]//4)) - - hnt = hnt.to(device) - - img_gen = gen(sk, hnt, sketch_feat=None).squeeze(0) - img_gen = denormalize(img_gen) * 255 - img_gen = img_gen.permute(1,2,0).detach().cpu().numpy().astype(np.uint8) - #return img_gen[pad_w:, pad_h:] - return Image.fromarray(img_gen[pad_w:, pad_h:]) - -def files(img_path, img_size=512): - img_path = os.path.abspath(img_path) - line_widths = sorted([el for el in os.listdir(os.path.join(img_path, 'pics_sketch')) if el != '.ipynb_checkpoints']) - images_names = sorted([el for el in os.listdir(os.path.join(img_path, 'pics_sketch', line_widths[0])) if '.jpg' in el]) - - images_names = [el for el in images_names if np.all(np.array(Image.open(os.path.join(img_path, 'pics', el)).size) >= np.array([img_size, img_size]))] - - images_color = [os.path.join(img_path, 'pics', el) for el in images_names] - images_sketch = {line_width:[os.path.join(img_path, 'pics_sketch', line_width, el) for el in images_names] for line_width in line_widths} - return images_color, images_sketch - -def mask_gen(img_size=512, bs=4): - maskS = img_size // 4 - - mask1 = torch.cat([torch.rand(1, 1, maskS, maskS).ge(X.rvs(1)[0]).float() for _ in range(bs // 2)], 0) - mask2 = torch.cat([torch.zeros(1, 1, maskS, maskS).float() for _ in range(bs // 2)], 0) - mask = torch.cat([mask1, mask2], 0) - return mask - -def jitter(x): - ran = random.uniform(0.7, 1) - return x * ran + 1 - ran - -def make_trans(img_size): - vtrans = transforms.Compose([ - RandomSizedCrop(img_size // 4, Image.BICUBIC), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ]) - - ctrans = transforms.Compose([ - transforms.Resize(img_size, Image.BICUBIC), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ]) - - strans = transforms.Compose([ - transforms.Resize(img_size, Image.BICUBIC), - transforms.ToTensor(), - transforms.Lambda(jitter), - transforms.Normalize((0.5), (0.5)) - ]) - - return vtrans, ctrans, strans - -class RandomCrop(object): - """Crops the given PIL.Image at a random location to have a region of - the given size. 
size can be a tuple (target_height, target_width) - or an integer, in which case the target will be of a square shape (size, size) - """ - - def __init__(self, size): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - - def __call__(self, img1, img2): - w, h = img1.size - th, tw = self.size - if w == tw and h == th: # ValueError: empty range for randrange() (0,0, 0) - return img1, img2 - - if w == tw: - x1 = 0 - y1 = random.randint(0, h - th) - return img1.crop((x1, y1, x1 + tw, y1 + th)), img2.crop((x1, y1, x1 + tw, y1 + th)) - - elif h == th: - x1 = random.randint(0, w - tw) - y1 = 0 - return img1.crop((x1, y1, x1 + tw, y1 + th)), img2.crop((x1, y1, x1 + tw, y1 + th)) - - else: - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - return img1.crop((x1, y1, x1 + tw, y1 + th)), img2.crop((x1, y1, x1 + tw, y1 + th)) - -class RandomSizedCrop(object): - """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size - and and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio - This is popularly used to train the Inception networks - size: size of the smaller edge - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, interpolation=Image.BICUBIC): - self.size = size - self.interpolation = interpolation - - def __call__(self, img): - for attempt in range(10): - area = img.size[0] * img.size[1] - target_area = random.uniform(0.9, 1.) * area - aspect_ratio = random.uniform(7. / 8, 8. / 7) - - w = int(round(math.sqrt(target_area * aspect_ratio))) - h = int(round(math.sqrt(target_area / aspect_ratio))) - - if random.random() < 0.5: - w, h = h, w - - if w <= img.size[0] and h <= img.size[1]: - x1 = random.randint(0, img.size[0] - w) - y1 = random.randint(0, img.size[1] - h) - - img = img.crop((x1, y1, x1 + w, y1 + h)) - assert (img.size == (w, h)) - - return img.resize((self.size, self.size), self.interpolation) - - # Fallback - Resize = Resize(self.size, interpolation=self.interpolation) - crop = CenterCrop(self.size) - return crop(Resize(img)) - - -class ImageFolder(data.Dataset): - def __init__(self, img_path, img_size): - - self.images_color, self.images_sketch = files(img_path, img_size) - if (any([self.images_sketch[key] == 0 for key in self.images_sketch])) or (len(self.images_color) == 0): - raise (RuntimeError("Found 0 images in one of the folders.")) - if any([len(self.images_sketch[key]) != len(self.images_color) for key in self.images_sketch]): - raise (RuntimeError("The number of sketches is not equal to the number of colorized images.")) - self.img_path = img_path - self.img_size = img_size - self.vtrans, self.ctrans, self.strans = make_trans(img_size) - - def __getitem__(self, index): - color = Image.open(self.images_color[index]).convert('RGB') - - random_line_width = random.choice(list(self.images_sketch.keys())) - sketch = Image.open(self.images_sketch[random_line_width][index]).convert('L') - #the image can be smaller than img_size, fix! 
- color, sketch = RandomCrop(self.img_size)(color, sketch) - if random.random() < 0.5: - color, sketch = color.transpose(Image.FLIP_LEFT_RIGHT), sketch.transpose(Image.FLIP_LEFT_RIGHT) - - color, color_down, sketch = self.ctrans(color), self.vtrans(color), self.strans(sketch) - - return color, color_down, sketch - - def __len__(self): - return len(self.images_color) - - -class GivenIterationSampler(Sampler): - def __init__(self, dataset, total_iter, batch_size, diter, last_iter=-1): - self.dataset = dataset - self.total_iter = total_iter - self.batch_size = batch_size - self.diter = diter - self.last_iter = last_iter - - self.total_size = self.total_iter * self.batch_size * (self.diter + 1) - - self.indices = self.gen_new_list() - self.call = 0 - - - def __iter__(self): - #if self.call == 0: - #self.call = 1 - return iter(self.indices[(self.last_iter + 1) * self.batch_size * (self.diter + 1):]) - #else: - # raise RuntimeError("this sampler is not designed to be called more than once!!") - - def gen_new_list(self): - # each process shuffle all list with same seed - np.random.seed(0) - - indices = np.arange(len(self.dataset)) - indices = indices[:self.total_size] - num_repeat = (self.total_size - 1) // indices.shape[0] + 1 - indices = np.tile(indices, num_repeat) - indices = indices[:self.total_size] - - np.random.shuffle(indices) - assert len(indices) == self.total_size - return indices - - def __len__(self): - # note here we do not take last iter into consideration, since __len__ - # should only be used for displaying, the correct remaining size is - # handled by dataloader - # return self.total_size - (self.last_iter+1)*self.batch_size - return self.total_size - -def get_dataloader(img_path, img_size=512, seed=0, total_iter=250000, bs=4, diters=1, last_iter=-1): - - random.seed(seed) - - train_dataset = ImageFolder(img_path=img_path, img_size=img_size) - - train_sampler = GivenIterationSampler(train_dataset, total_iter, bs, diters, last_iter=last_iter) - - return data.DataLoader(train_dataset, batch_size=bs, shuffle=False, pin_memory=True, num_workers=4, sampler=train_sampler) - - -def get_data(links, img_path='alacgan_data', line_widths=[0.3, 0.5]): - c = 0 - - for line_width in line_widths: - lw = str(line_width) - if lw not in os.listdir(os.path.join(img_path, 'pics_sketch')): - os.mkdir(os.path.join(img_path, 'pics_sketch', lw)) - else: - shutil.rmtree(os.path.join(img_path, 'pics_sketch', lw)) - os.mkdir(os.path.join(img_path, 'pics_sketch', lw)) - - for link in tqdm(links): - img_orig = Image.open(requests.get(link, stream=True).raw).convert('RGB') - img_orig.save(os.path.join(img_path, 'pics', str(c) + '.jpg'), 'JPEG') - for line_width in line_widths: - sketch_test = to_sketch(img_orig, sigma=line_width, k=5, gamma=0.96, epsilon=-1, phi=10e15, area_min=2) - sketch_test.save(os.path.join(img_path, 'pics_sketch', str(line_width), str(c) + '.jpg'), 'JPEG') - - c += 1 diff --git a/spaces/nomic-ai/OpenAssistant_oasst1/README.md b/spaces/nomic-ai/OpenAssistant_oasst1/README.md deleted file mode 100644 index a84a64ce39b81a05b4ca56434cee222904e9bcb3..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/OpenAssistant_oasst1/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: OpenAssistant/oasst1 -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false -duplicated_from: nomic-ai/fka_awesome-chatgpt-prompts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/ntt123/Vietnam-female-voice-TTS/commons.py b/spaces/ntt123/Vietnam-female-voice-TTS/commons.py deleted file mode 100644 index 9455c8744836112b81bb5227006647fa46b36df8..0000000000000000000000000000000000000000 --- a/spaces/ntt123/Vietnam-female-voice-TTS/commons.py +++ /dev/null @@ -1,162 +0,0 @@ -import math - -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def 
convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/nugrahatheo/Customer-Segmentation/clustering.py b/spaces/nugrahatheo/Customer-Segmentation/clustering.py deleted file mode 100644 index 4594e9998b4c04a0f954f8f274df562dcc5a241a..0000000000000000000000000000000000000000 --- a/spaces/nugrahatheo/Customer-Segmentation/clustering.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import pickle -import json - -# Load all files -with open('list_num_cols.txt', 'r') as file_1: - list_num_cols = json.load(file_1) - -with open('model_scaler.pkl', 'rb') as file_2: - scaler = pickle.load(file_2) - -with open('model_pca.pkl', 'rb') as file_3: - pca = pickle.load(file_3) - -with open('model_kmeans.pkl', 'rb') as file_4: - model_kmeans = pickle.load(file_4) - -def run(): - st.write('##### Form Clustering Customer Segmentation') - # Making Form - with st.form(key='Form Clustering Customer Segmentation'): - Customer_ID = st.number_input('Customer Id', min_value=1, max_value=10000, value=1) - Age = st.number_input('Age', min_value=20, max_value=60, value=20) - Edu = st.number_input('Edu', min_value=1, max_value=5, value=1) - Years_Employed = st.number_input('Years Employed', min_value=1, max_value=55, value=1) - Income = st.number_input('Income', min_value=0.0, max_value=9999.0, value=0.0) - Card_Debt = st.number_input('Card Debt', min_value=0.0, max_value=9999.0, value=0.0) - Other_Debt = st.number_input('Other Debt', min_value=0.0, max_value=9999.0, value=0.0) - Defaulted = st.radio("Defaulted",["0", "1"]) - DebtIncomeRatio = st.number_input('DebtIncomeRatio', min_value=0.0, max_value=100.0, value=0.0) - st.markdown('---') - - submited = st.form_submit_button('Process Now') - - data_inf = { - 'Customer_ID' : Customer_ID, - 'Age' : Age, - 'Edu' : Edu, - 'Years Employed' : Years_Employed, - 'Income' : Income, - 'Card Debt' : Card_Debt, - 'Other Debt' : Other_Debt, - 'Defaulted' : Defaulted, - 'DebtIncomeRatio' : DebtIncomeRatio - } - - data_inf = pd.DataFrame([data_inf]) - 
st.dataframe(data_inf) - - if submited: - # Split between numerical columns and categorical columns - data_inf_num = data_inf[list_num_cols] - # Feature scaling - data_inf_num_scaled = scaler.transform(data_inf_num) - data_inf_final = data_inf_num_scaled - # PCA - data_inf_num_pca = pca.transform(data_inf_final) - data_inf_final_pca = data_inf_num_pca - # Predict using K-Means Clustering Model - prediction_cluster = model_kmeans.predict(data_inf_final_pca) - st.write('# Nasabah ini cocok masuk ke dalam klaster : ', str(prediction_cluster[0])) - if prediction_cluster == 0: - st.write('Produk yang cocok untuk nasabah klaster 0 adalah sebagai berikut:') - st.write('1. **Tabungan** dan **Investasi**: Doronglah mereka untuk memanfaatkan pendapatan yang lebih tinggi seiring bertambahnya usia dengan menawarkan produk tabungan atau investasi yang membantu mereka menumbuhkan kekayaan.') - st.write('2. **Produk Reksa Dana**: Sarankan produk reksa dana yang sesuai dengan profil risiko mereka untuk membantu mereka berinvestasi dengan cara yang terdiversifikasi.') - st.write('3. **Kredit Suku Bunga Rendah**: Menawarkan produk kredit dengan suku bunga rendah atau kartu kredit dengan suku bunga yang kompetitif kepada konsumen untuk membantu mereka mengelola kebutuhan finansial sehari-hari.') - st.write('') - st.write('Penting untuk selalu memahami profil nasabah individu dan kebutuhan mereka secara lebih rinci sebelum menawarkan produk perbankan atau keuangan tertentu. Selain itu, pendekatan yang berfokus pada edukasi keuangan juga dapat membantu.') - else: - st.write('Produk yang cocok untuk nasabah klaster 1 adalah sebagai berikut:') - st.write('1. **Manajemen Hutang**: Menawarkan layanan manajemen utang dan konsultasi keuangan yang dapat membantu mereka mengelola utang dengan lebih baik.') - st.write('2. **Kredit Konsolidasi**: Merekomendasikan produk konsolidasi utang yang dapat membantu nasabah mengkonsolidasikan utang mereka dengan bunga yang lebih rendah, sehingga dapat mengurangi beban utang mereka.') - st.write('3. **Asuransi Perlindungan Pendapatan**: Menyediakan produk asuransi perlindungan pendapatan atau asuransi kredit yang dapat melindungi mereka dalam situasi darurat dan membantu mereka melunasi hutang mereka jika terjadi hal-hal yang tidak terduga.') - st.write('4. **Program Edukasi Keuangan**: Menyediakan program edukasi keuangan khusus untuk anggota Cluster 1 agar mereka dapat memahami cara mengelola utang dengan lebih baik dan menghindari penumpukan utang yang lebih besar.') - st.write('5. **Investasi Pemulihan Utang**: Menawarkan produk investasi yang dapat membantu mereka mendapatkan penghasilan tambahan untuk melunasi utang mereka lebih cepat.') - st.write('') - st.write('Penting untuk selalu memahami profil nasabah individu dan kebutuhan mereka secara lebih rinci sebelum menawarkan produk perbankan atau keuangan tertentu. 
Selain itu, pendekatan yang berfokus pada edukasi keuangan juga dapat membantu.') - -if __name__ == '__main__': - run() \ No newline at end of file diff --git a/spaces/odettecantswim/rvc-mlbb/README.md b/spaces/odettecantswim/rvc-mlbb/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/rvc-mlbb/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/trainer.py b/spaces/oguzakif/video-object-remover/FGT_codes/FGT/trainer.py deleted file mode 100644 index 08d0644b9e464a1fda954de38ebd524089a79c26..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/trainer.py +++ /dev/null @@ -1,200 +0,0 @@ -import math -import parse -import logging -from utils import util -from torch.utils.data.distributed import DistributedSampler -from torch.nn.parallel import DistributedDataParallel as DDP -from data import create_dataset, create_dataloader -from models.utils.loss import * -import yaml -from abc import abstractmethod, ABCMeta -from models.utils.flow_losses import AdversarialLoss - - -class Trainer(metaclass=ABCMeta): - def __init__(self, opt, rank): - self.opt = opt - self.rank = rank - - # make directory and set logger - if rank <= 0: - self.mkdir() - self.logger, self.tb_logger = self.setLogger() - self.setSeed() - self.dataInfo, self.valInfo, self.trainSet, self.trainSize, self.totalIterations, self.totalEpochs, self.trainLoader, self.trainSampler = self.prepareDataset() - self.model, self.dist, self.optimizer, self.dist_optim, self.scheduler, self.dist_scheduler = self.init_model() - self.flow_model = self.init_flow_model() - self.model = self.model.to(self.opt['device']) - self.dist = self.dist.to(self.opt['device']) - if opt['path'].get('gen_state', None): - self.startEpoch, self.currentStep = self.resume_training() - else: - self.startEpoch, self.currentStep = 0, 0 - if opt['distributed']: - self.model = DDP( - self.model, - device_ids=[self.opt['local_rank']], - output_device=self.opt['local_rank'], - find_unused_parameters=True - ) - self.dist = DDP( - self.dist, - device_ids=[self.opt['local_rank']], - output_device=self.opt['local_rank'], - find_unused_parameters=True - ) - if self.rank <= 0: - self.logger.info('Start training from epoch: {}, iter: {}'.format( - self.startEpoch, self.currentStep)) - self.best_psnr = 0 - self.valid_best_psnr = 0 - - self.maskedLoss = nn.L1Loss() - self.validLoss = nn.L1Loss() - self.adversarial_loss = AdversarialLoss(type='hinge') - self.adversarial_loss = self.adversarial_loss.to(self.opt['device']) - self.countDown = 0 - - # metrics recorder - self.total_loss = 0 - self.total_psnr = 0 - self.total_ssim = 0 - self.total_l1 = 0 - self.total_l2 = 0 - - def get_lr(self): - lr = [] - for param_group in self.optimizer.param_groups: - lr += [param_group['lr']] - for param_group in self.dist_optim.param_groups: - lr += [param_group['lr']] - return lr - - def adjust_learning_rate(self, optimizer, target_lr): - for param_group in optimizer.param_groups: - param_group['lr'] = target_lr - for param_group in self.dist_optim.param_groups: - param_group['lr'] = target_lr - - def mkdir(self): - 
new_name = util.mkdir_and_rename(self.opt['path']['OUTPUT_ROOT']) - if new_name: - self.opt['path']['TRAINING_STATE'] = os.path.join(new_name, 'training_state') - self.opt['path']['LOG'] = os.path.join(new_name, 'log') - self.opt['path']['VAL_IMAGES'] = os.path.join(new_name, 'val_images') - if not os.path.exists(self.opt['path']['TRAINING_STATE']): - os.makedirs(self.opt['path']['TRAINING_STATE']) - if not os.path.exists(self.opt['path']['LOG']): - os.makedirs(self.opt['path']['LOG']) - if not os.path.exists(self.opt['path']['VAL_IMAGES']): - os.makedirs(self.opt['path']['VAL_IMAGES']) - # save config file for output - with open(os.path.join(self.opt['path']['LOG'], 'config.yaml'), 'w') as f: - yaml.dump(self.opt, f) - - def setLogger(self): - util.setup_logger('base', self.opt['path']['LOG'], 'train_' + self.opt['name'], level=logging.INFO, - screen=True, tofile=True) - logger = logging.getLogger('base') - logger.info(parse.toString(self.opt)) - logger.info('OUTPUT DIR IS: {}'.format(self.opt['path']['OUTPUT_ROOT'])) - if self.opt['use_tb_logger']: - version = float(torch.__version__[0:3]) - if version >= 1.1: - from torch.utils.tensorboard import SummaryWriter - else: - logger.info('You are using PyTorch {}, Tensorboard will use [tensorboardX)'.format(version)) - from tensorboardX import SummaryWriter - tb_logger = SummaryWriter(os.path.join(self.opt['path']['OUTPUT_ROOT'], 'log')) - else: - tb_logger = None - return logger, tb_logger - - def setSeed(self): - seed = self.opt['train']['manual_seed'] - if self.rank <= 0: - self.logger.info('Random seed: {}'.format(seed)) - util.set_random_seed(seed) - torch.backends.cudnn.benchmark = True - if seed == 0: - torch.backends.cudnn.deterministic = True - - def prepareDataset(self): - dataInfo = self.opt['datasets']['dataInfo'] - valInfo = self.opt['datasets']['valInfo'] - valInfo['norm'] = self.opt['norm'] - if self.rank <= 0: - self.logger.debug('Val info is: {}'.format(valInfo)) - train_set, train_size, total_iterations, total_epochs = 0, 0, 0, 0 - train_loader, train_sampler = None, None - for phase, dataset in self.opt['datasets'].items(): - dataset['norm'] = self.opt['norm'] - dataset['dataMode'] = self.opt['dataMode'] - dataset['num_frames'] = self.opt['num_frames'] - dataset['sample'] = self.opt['sample'] - dataset['flow2rgb'] = self.opt['flow2rgb'] - dataset['flow_direction'] = self.opt['flow_direction'] - dataset['max_val'] = self.opt['max_val'] - dataset['input_resolution'] = self.opt['input_resolution'] - if phase.lower() == 'train': - train_set = create_dataset(dataset, dataInfo, phase, self.opt['datasetName_train']) - train_size = math.ceil( - len(train_set) / (dataset['batch_size'] * self.opt['world_size'])) - total_iterations = self.opt['train']['MAX_ITERS'] - total_epochs = int(math.ceil(total_iterations / train_size)) - if self.opt['distributed']: - train_sampler = DistributedSampler( - train_set, - num_replicas=self.opt['world_size'], - rank=self.opt['global_rank']) - else: - train_sampler = None - train_loader = create_dataloader(phase, train_set, dataset, self.opt, train_sampler) - if self.rank <= 0: - self.logger.info('Number of training batches: {}, iters: {}'.format(len(train_set), - total_iterations)) - self.logger.info('Total epoch needed: {} for iters {}'.format(total_epochs, total_iterations)) - assert train_set != 0 and train_size != 0, "Train size cannot be zero" - assert train_loader is not None, "Cannot find train set, val set can be None" - return dataInfo, valInfo, train_set, train_size, total_iterations, 
total_epochs, train_loader, train_sampler - - @abstractmethod - def init_model(self): - pass - - @abstractmethod - def init_flow_model(self): - pass - - @abstractmethod - def resume_training(self): - pass - - def train(self): - for epoch in range(self.startEpoch, self.totalEpochs + 1): - if self.opt['distributed']: - self.trainSampler.set_epoch(epoch) - self._trainEpoch(epoch) - if self.currentStep > self.totalIterations: - break - if self.opt['use_valid'] and (epoch + 1) % self.opt['train']['val_freq'] == 0: - self._validate(epoch) - self.scheduler.step(epoch) - self.dist_scheduler.step(epoch) - - @abstractmethod - def _trainEpoch(self, epoch): - pass - - @abstractmethod - def _printLog(self, logs, epoch, loss): - pass - - @abstractmethod - def save_checkpoint(self, epoch, is_best, metric, number): - pass - - @abstractmethod - def _validate(self, epoch): - pass - diff --git a/spaces/oliver2023/chatgpt-on-wechat/bridge/reply.py b/spaces/oliver2023/chatgpt-on-wechat/bridge/reply.py deleted file mode 100644 index c6bcd5465fe6187aac2d48935f2407e2dbf389d9..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/bridge/reply.py +++ /dev/null @@ -1,22 +0,0 @@ - -# encoding:utf-8 - -from enum import Enum - -class ReplyType(Enum): - TEXT = 1 # 文本 - VOICE = 2 # 音频文件 - IMAGE = 3 # 图片文件 - IMAGE_URL = 4 # 图片URL - - INFO = 9 - ERROR = 10 - def __str__(self): - return self.name - -class Reply: - def __init__(self, type : ReplyType = None , content = None): - self.type = type - self.content = content - def __str__(self): - return "Reply(type={}, content={})".format(self.type, self.content) \ No newline at end of file diff --git a/spaces/omlab/vlchecklist_demo/models/vilt/__init__.py b/spaces/omlab/vlchecklist_demo/models/vilt/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_tan\303\241r_en.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_tan\303\241r_en.html" deleted file mode 100644 index 5c4589cbf9c6b376121572bc0da02a8b5c69da7e..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_tan\303\241r_en.html" +++ /dev/null @@ -1,46 +0,0 @@ -
      0th instance — source "Ő tanár." translated as "He's a teacher."

      Source Saliency Heatmap (x: generated tokens, y: attributed source tokens)

                    ▁He       '        s       ▁a    ▁teacher    .       </s>
      ▁Ő           0.492    0.22     0.162    0.109    0.412    0.131   -0.576
      ▁tanár       0.866    0.763    0.266    0.433    0.769    0.709    0.472
      .            0.086    0.59     0.025    0.102   -0.104    0.648    0.427
      </s>         0.0      0.0      0.0      0.0      0.0      0.0      0.0

      Target Saliency Heatmap (x: generated tokens, y: attributed target-prefix tokens)

                    ▁He       '        s       ▁a    ▁teacher    .       </s>
      ▁He                   0.147    0.404    0.307    0.331    0.063    0.045
      '                              0.86     0.273    0.233    0.079    0.411
      s                                       0.788    0.198    0.21    -0.025
      ▁a                                              -0.161    0.035    0.226
      ▁teacher                                                  0.074    0.2
      .                                                                  0.019
      </s>
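The deleted HTML above contains only rendered attribution scores, not the code that produced them. As a rough, hedged illustration of how such source/target saliency heatmaps are typically generated for a Hungarian-to-English MT model, the sketch below uses the inseq library; the library choice, checkpoint id, and call signatures are assumptions inferred from the output format rather than taken from this repository.

```python
# Minimal sketch (assumed tooling, not from this repo): gradient-saliency
# attribution for a Hungarian -> English translation model.
import inseq

# Load an MT model together with the "saliency" attribution method.
model = inseq.load_model("Helsinki-NLP/opus-mt-hu-en", "saliency")

# "Ő" is a gender-neutral Hungarian pronoun; the heatmap shows which source
# tokens drive the gendered English output ("He's a teacher.").
out = model.attribute("Ő tanár.")
out.show()  # renders source/target saliency heatmaps like the tables above
```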
      - diff --git a/spaces/p1atdev/AdverseCleaner/README.md b/spaces/p1atdev/AdverseCleaner/README.md deleted file mode 100644 index 46487e326e7f8880a4992246566162c0ed2679d3..0000000000000000000000000000000000000000 --- a/spaces/p1atdev/AdverseCleaner/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AdverseCleaner -emoji: 🧹 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.20.1 -app_file: main.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md deleted file mode 100644 index b63519b41fe69347c4e696dafc9eda23531b78eb..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md +++ /dev/null @@ -1,30 +0,0 @@ - - -# DPMSolverMultistepInverse - -`DPMSolverMultistepInverse` is the inverted scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. - -The implementation is mostly based on the DDIM inversion definition of [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://huggingface.co/papers/2211.09794.pdf) and notebook implementation of the [`DiffEdit`] latent inversion from [Xiang-cd/DiffEdit-stable-diffusion](https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/diffedit.ipynb). - -## Tips - -Dynamic thresholding from Imagen (https://huggingface.co/papers/2205.11487) is supported, and for pixel-space -diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic -thresholding. This thresholding method is unsuitable for latent-space diffusion models such as -Stable Diffusion. 
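As a practical complement to the prose above, here is a hedged sketch of how this inverse scheduler is typically paired with DiffEdit-style latent inversion in Diffusers; the pipeline class, checkpoint id, and method names follow the library's DiffEdit documentation and may differ between versions.

```python
# Sketch: wiring DPMSolverMultistepInverseScheduler into a DiffEdit workflow.
import torch
from diffusers import (
    StableDiffusionDiffEditPipeline,
    DPMSolverMultistepScheduler,
    DPMSolverMultistepInverseScheduler,
)

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Forward sampling uses the regular multistep DPM-Solver; inversion uses its inverse.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipe.scheduler.config)

# With a PIL `init_image` and source/target prompts, the edit then looks like:
#   mask = pipe.generate_mask(image=init_image, source_prompt=src, target_prompt=tgt)
#   inv_latents = pipe.invert(prompt=src, image=init_image).latents
#   edited = pipe(prompt=tgt, mask_image=mask, image_latents=inv_latents).images[0]
```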
- -## DPMSolverMultistepInverseScheduler -[[autodoc]] DPMSolverMultistepInverseScheduler - -## SchedulerOutput -[[autodoc]] schedulers.scheduling_utils.SchedulerOutput diff --git a/spaces/patent/demo3/README.md b/spaces/patent/demo3/README.md deleted file mode 100644 index 81857e28b833f82e710ee166a2a02af8183c9d52..0000000000000000000000000000000000000000 --- a/spaces/patent/demo3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo3 -emoji: 👁 -colorFrom: pink -colorTo: yellow -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pdjewell/sommeli_ai/app.py b/spaces/pdjewell/sommeli_ai/app.py deleted file mode 100644 index 7b0f267e315737d76e59b50ede57f05cb169ebf9..0000000000000000000000000000000000000000 --- a/spaces/pdjewell/sommeli_ai/app.py +++ /dev/null @@ -1,242 +0,0 @@ -import numpy as np -import pandas as pd -import os -from PIL import Image -import streamlit as st -from streamlit import components -from datasets import Dataset, load_dataset, load_from_disk -import faiss -from scripts.preprocessing import preprocess - -# App config -icon = Image.open('./images/wine_icon.png') -st.set_page_config(page_title="Sommeli-AI", - page_icon=icon, - layout="wide") -hide_default_format = """ - - """ -st.markdown(hide_default_format, unsafe_allow_html=True) - -# App functions -@st.cache_data -def read_data(ds_path=None): - - if ds_path is not None: - # Read in hf file - embeddings_dataset = load_from_disk(ds_path) - else: - embeddings_dataset = load_dataset("pdjewell/sommeli_ai", split="train") - - # Convert to pandas df - embeddings_dataset.set_format("pandas") - df = embeddings_dataset[:] - - # preprocess data (add type col, remove dups) - df = preprocess(df) - - return df - - -def get_neighbours(df, query_embedding, k=6, - metric='inner'): - - # convert from pandas df to hf ds - ds = Dataset.from_pandas(df) - ds.reset_format() - ds = ds.with_format("np") - - # add faiss index - if metric == 'inner': - ds.add_faiss_index(column="embeddings", - metric_type=faiss.METRIC_INNER_PRODUCT) - else: - ds.add_faiss_index(column="embeddings", - metric_type=faiss.METRIC_L2) - - scores, samples = ds.get_nearest_examples( - "embeddings", query_embedding, k=k) - - samples.pop('embeddings') - samples.pop('__index_level_0__') - - return scores, samples - - -def filter_df_search(df: pd.DataFrame) -> pd.DataFrame: - - modify_search = st.checkbox("🔍 Further filter search selection") - - if not modify_search: - return df - - df = df.copy() - - modification_container_search = st.container() - - with modification_container_search: - to_filter_columns = st.multiselect("Filter on:", - ['Province', 'Region', 'Winery','Score', 'Price'], - key='search') - - for column in to_filter_columns: - if column in ['Score', 'Price']: # Use slider for 'points' and 'price' - min_val = 0 - max_val = int(df[column].max()) - user_input = st.slider(f"Values for {column}", min_val, max_val, (min_val, max_val)) - df = df[(df[column] >= user_input[0]) & (df[column] <= user_input[1])] - elif column in ['Country', 'Province', 'Region', 'Variety', 'Winery']: # Use multiselect for these columns - unique_values = df[column].dropna().unique() - default_values = [unique_values[0]] if len(unique_values) > 0 else [] # Select only the first unique value if it exists - user_input = st.multiselect(f"Values for {column}", unique_values, default_values) - df = df[df[column].isin(user_input)] - - return df - - -def 
filter_df_recs(df: pd.DataFrame) -> pd.DataFrame: - - modify_recs = st.checkbox("🔍 Filter recommendation results") - - if not modify_recs: - return df - - df = df.copy() - - modification_container_recs = st.container() - - with modification_container_recs: - - to_filter_columns2 = st.multiselect("Filter on:", - ['Country','Province', 'Region', 'Variety', 'Winery', - 'Score', 'Price'], - key='recs') - - for column in to_filter_columns2: - if column in ['Score', 'Price']: # Use slider for 'points' and 'price' - min_val = 0 - max_val = int(df[column].max()) - user_input = st.slider(f"Values for {column}", min_val, max_val, (min_val, max_val)) - df = df[(df[column] >= user_input[0]) & (df[column] <= user_input[1])] - elif column in ['Country', 'Province', 'Region', 'Variety', 'Winery']: # Use multiselect for these columns - unique_values = df[column].dropna().unique() - default_values = [unique_values[0]] if len(unique_values) > 0 else [] # Select only the first unique value if it exists - user_input = st.multiselect(f"Values for {column}", unique_values, default_values) - df = df[df[column].isin(user_input)] - - return df - - -if __name__ == "__main__": - st.title("🍷 Sommeli-AI") - - # Read in data - ds_path = "./data/wine_ds.hf" - df = read_data(ds_path=None) - - maincol, acol = st.columns([0.999,0.001]) - with maincol: - col1, col2 = st.columns([0.65,0.35], gap="medium") - with col2: - st.header("Explore the world of wine 🌍") - wine_plot = st.radio('Select plot type:', ['2D','3D'], - label_visibility = "hidden", - horizontal=True) - st.text("Click the legend categories to filter") - - # Load the HTML file - with open('./images/px_2d.html', 'r') as file: - plot2d_html = file.read() - # Load the HTML file - with open('./images/px_3d.html', 'r') as file: - plot3d_html = file.read() - # Display the HTML plot in the Streamlit app - if wine_plot == '2D': - components.v1.html(plot2d_html, width=512, height=512) - elif wine_plot == '3D': - components.v1.html(plot3d_html, width=512, height=512) - - with col1: - - # Select all wine types initially - st.header("Search for similar wines 🥂") - # Select wine type: default is all - wine_types = df['Type'].unique() - selected_wine_types = st.multiselect("Select category 👇", wine_types, default=wine_types) - df = df[df['Type'].isin(selected_wine_types)] - #subcol1, subcol2 = st.columns([0.5,0.5], gap="small") - #with subcol1: - # Select wine variety: default is all - wine_vars = df['Variety'].unique() - selected_wine_vars = st.multiselect("Narrow down the variety 🍇",['Select all'] + list(wine_vars), - default = 'Select all') - if "Select all" in selected_wine_vars: - df_search = df - else: - df_search = df[df['Variety'].isin(selected_wine_vars)] - - #with subcol2: - # Select the country: default is all - countries = df_search['Country'].unique() - selected_countries = st.multiselect("Narrow down the country 🌎",['Select all'] + list(countries), - default = 'Select all') - if "Select all" in selected_countries: - df_search = df_search - else: - df_search = df_search[df_search['Country'].isin(selected_countries)] - - # Add additional filters - df_search = filter_df_search(df_search) - - # Create a search bar for the wine 'title' - selected_wine = st.selectbox("Search for and select a wine 👇", [''] + list(df_search["Title"].unique())) - - if selected_wine: - # Get the embedding for selected_wine - query_embedding = df.loc[df['Title']==selected_wine, 'embeddings'].iloc[0] - - tasting_notes = df.loc[df['Title']==selected_wine, 'Tasting notes'].iloc[0] - 
st.write(f"Tasting notes: {tasting_notes}") - - # CSS to inject contained in a string - hide_table_row_index = """ - - """ - # Inject CSS with Markdown - st.markdown(hide_table_row_index, unsafe_allow_html=True) - # Display selected wine - st.header("Your selected wine 🍷") - selected_cols = ['Title','Country','Province','Region','Winery', - 'Variety','Tasting notes','Score'] - st.table(df.loc[df['Title']==selected_wine, selected_cols].fillna("")) - # Slider for results to show - k = st.slider(f"Choose how many similar wines to show 👇", 1, 10, value=4) - - # Filter recommendation results - df_results = filter_df_recs(df) - - else: - print("Awaiting selection") - - if selected_wine: - # Display results as table - if st.button("🔘 Press me to generate similar tasting wines"): - # Get neighbours - scores, samples = get_neighbours(df_results, query_embedding, - k=k+1, metric='l2') - recs_df = pd.DataFrame(samples).fillna("") - recs_df = recs_df.fillna(" ") - # Display results - st.header(f"Top {k} similar tasting wines 🍾") - st.table(recs_df.loc[1:,selected_cols]) - - else: - print("Awaiting selection") - - diff --git a/spaces/perilli/tortoise-tts-v2/models/autoregressive.py b/spaces/perilli/tortoise-tts-v2/models/autoregressive.py deleted file mode 100644 index 6a91748d01ce35672554a8f39a0ca82fb562846b..0000000000000000000000000000000000000000 --- a/spaces/perilli/tortoise-tts-v2/models/autoregressive.py +++ /dev/null @@ -1,577 +0,0 @@ -import functools - -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList -from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions -from transformers.utils.model_parallel_utils import get_device_map, assert_device_map -from models.arch_util import AttentionBlock -from utils.typical_sampling import TypicalLogitsWarper - - -def null_position_embeddings(range, dim): - return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device) - - -class ResBlock(nn.Module): - """ - Basic residual convolutional block that uses GroupNorm. 
- """ - def __init__(self, chan): - super().__init__() - self.net = nn.Sequential( - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan), - nn.ReLU(), - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan) - ) - - def forward(self, x): - return F.relu(self.net(x) + x) - - -class GPT2InferenceModel(GPT2PreTrainedModel): - def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear): - super().__init__(config) - self.transformer = gpt - self.text_pos_embedding = text_pos_emb - self.embeddings = embeddings - self.lm_head = nn.Sequential(norm, linear) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.cached_mel_emb = None - - def parallelize(self, device_map=None): - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - def deparallelize(self): - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def store_mel_emb(self, mel_emb): - self.cached_mel_emb = mel_emb - - def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs): - - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - return { - "input_ids": input_ids, - "past_key_values": past, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - def forward( - self, - input_ids=None, - past_key_values=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - assert self.cached_mel_emb is not None - assert inputs_embeds is None # Not supported by this inference model. - assert labels is None # Training not supported by this inference model. 
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # Create embedding - mel_len = self.cached_mel_emb.shape[1] - if input_ids.shape[1] != 1: - text_inputs = input_ids[:, mel_len:] - text_emb = self.embeddings(text_inputs) - text_emb = text_emb + self.text_pos_embedding(text_emb) - if self.cached_mel_emb.shape[0] != text_emb.shape[0]: - mel_emb = self.cached_mel_emb.repeat_interleave(text_emb.shape[0]//self.cached_mel_emb.shape[0], 0) - else: - mel_emb = self.cached_mel_emb - emb = torch.cat([mel_emb, text_emb], dim=1) - else: - emb = self.embeddings(input_ids) - emb = emb + self.text_pos_embedding.get_fixed_embedding(attention_mask.shape[1]-mel_len, attention_mask.device) - - transformer_outputs = self.transformer( - inputs_embeds=emb, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + transformer_outputs[1:] - - return CausalLMOutputWithCrossAttentions( - loss=None, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache(past, beam_idx): - """ - This function is used to re-order the :obj:`past_key_values` cache if - :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is - called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step. 
- """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past - ) - - -class ConditioningEncoder(nn.Module): - def __init__(self, - spec_dim, - embedding_dim, - attn_blocks=6, - num_attn_heads=4, - do_checkpointing=False, - mean=False): - super().__init__() - attn = [] - self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1) - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - self.do_checkpointing = do_checkpointing - self.mean = mean - - def forward(self, x): - h = self.init(x) - h = self.attn(h) - if self.mean: - return h.mean(dim=2) - else: - return h[:, :, 0] - - -class LearnedPositionEmbeddings(nn.Module): - def __init__(self, seq_len, model_dim, init=.02): - super().__init__() - self.emb = nn.Embedding(seq_len, model_dim) - # Initializing this way is standard for GPT-2 - self.emb.weight.data.normal_(mean=0.0, std=init) - - def forward(self, x): - sl = x.shape[1] - return self.emb(torch.arange(0, sl, device=x.device)) - - def get_fixed_embedding(self, ind, dev): - return self.emb(torch.tensor([ind], device=dev)).unsqueeze(0) - - -def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing): - """ - GPT-2 implemented by the HuggingFace library. - """ - from transformers import GPT2Config, GPT2Model - gpt_config = GPT2Config(vocab_size=256, # Unused. - n_positions=max_mel_seq_len+max_text_seq_len, - n_ctx=max_mel_seq_len+max_text_seq_len, - n_embd=model_dim, - n_layer=layers, - n_head=heads, - gradient_checkpointing=checkpointing, - use_cache=not checkpointing) - gpt = GPT2Model(gpt_config) - # Override the built in positional embeddings - del gpt.wpe - gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim) - # Built-in token embeddings are unused. - del gpt.wte - return gpt, LearnedPositionEmbeddings(max_mel_seq_len, model_dim), LearnedPositionEmbeddings(max_text_seq_len, model_dim),\ - None, None - - -class MelEncoder(nn.Module): - def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2): - super().__init__() - self.channels = channels - self.encoder = nn.Sequential(nn.Conv1d(mel_channels, channels//4, kernel_size=3, padding=1), - nn.Sequential(*[ResBlock(channels//4) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//4, channels//2, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//16, channels//2), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels//2) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//2, channels, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//8, channels), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]), - ) - self.reduction = 4 - - - def forward(self, x): - for e in self.encoder: - x = e(x) - return x.permute(0,2,1) - - -class UnifiedVoice(nn.Module): - def __init__(self, layers=8, model_dim=512, heads=8, max_text_tokens=120, max_mel_tokens=250, max_conditioning_inputs=1, - mel_length_compression=1024, number_text_tokens=256, - start_text_token=None, number_mel_codes=8194, start_mel_token=8192, - stop_mel_token=8193, train_solo_embeddings=False, use_mel_codes_as_input=True, - checkpointing=True, average_conditioning_embeddings=False, - types=1): - """ - Args: - layers: Number of layers in transformer stack. - model_dim: Operating dimensions of the transformer - heads: Number of transformer heads. 
Must be divisible by model_dim. Recommend model_dim//64 - max_text_tokens: Maximum number of text tokens that will be encountered by model. - max_mel_tokens: Maximum number of MEL tokens that will be encountered by model. - max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s). - mel_length_compression: The factor between and . Used to compute MEL code padding given wav input length. - number_text_tokens: - start_text_token: - stop_text_token: - number_mel_codes: - start_mel_token: - stop_mel_token: - train_solo_embeddings: - use_mel_codes_as_input: - checkpointing: - average_conditioning_embeddings: Whether or not conditioning embeddings should be averaged, instead of fed piecewise into the model. - """ - super().__init__() - - self.number_text_tokens = number_text_tokens - self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token - self.stop_text_token = 0 - self.number_mel_codes = number_mel_codes - self.start_mel_token = start_mel_token - self.stop_mel_token = stop_mel_token - self.layers = layers - self.heads = heads - self.max_mel_tokens = max_mel_tokens - self.max_text_tokens = max_text_tokens - self.model_dim = model_dim - self.max_conditioning_inputs = max_conditioning_inputs - self.mel_length_compression = mel_length_compression - self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads) - self.average_conditioning_embeddings = average_conditioning_embeddings - self.text_embedding = nn.Embedding(self.number_text_tokens*types+1, model_dim) - if use_mel_codes_as_input: - self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim) - else: - self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1) - self.gpt, self.mel_pos_embedding, self.text_pos_embedding, self.mel_layer_pos_embedding, self.text_layer_pos_embedding = \ - build_hf_gpt_transformer(layers, model_dim, heads, self.max_mel_tokens+2+self.max_conditioning_inputs, self.max_text_tokens+2, checkpointing) - if train_solo_embeddings: - self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - else: - self.mel_solo_embedding = 0 - self.text_solo_embedding = 0 - - self.final_norm = nn.LayerNorm(model_dim) - self.text_head = nn.Linear(model_dim, self.number_text_tokens*types+1) - self.mel_head = nn.Linear(model_dim, self.number_mel_codes) - - # Initialize the embeddings per the GPT-2 scheme - embeddings = [self.text_embedding] - if use_mel_codes_as_input: - embeddings.append(self.mel_embedding) - for module in embeddings: - module.weight.data.normal_(mean=0.0, std=.02) - - def build_aligned_inputs_and_targets(self, input, start_token, stop_token): - inp = F.pad(input, (1,0), value=start_token) - tar = F.pad(input, (0,1), value=stop_token) - return inp, tar - - def set_mel_padding(self, mel_input_tokens, wav_lengths): - """ - Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in - that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required - preformatting to create a working TTS model. - """ - # Set padding areas within MEL (currently it is coded with the MEL code for ). 
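- # Convert each wav length to a mel-token length via mel_length_compression, then overwrite every position past the clip's true length with stop_mel_token.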
- mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode='trunc') - for b in range(len(mel_lengths)): - actual_end = mel_lengths[b] + 1 # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token. - if actual_end < mel_input_tokens.shape[-1]: - mel_input_tokens[b, actual_end:] = self.stop_mel_token - return mel_input_tokens - - def get_logits(self, speech_conditioning_inputs, first_inputs, first_head, second_inputs=None, second_head=None, get_attns=False, return_latent=False): - if second_inputs is not None: - emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1) - else: - emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1) - - gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns) - if get_attns: - return gpt_out.attentions - - enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input - enc = self.final_norm(enc) - - if return_latent: - return enc[:, speech_conditioning_inputs.shape[1]:speech_conditioning_inputs.shape[1]+first_inputs.shape[1]], enc[:, -second_inputs.shape[1]:] - - first_logits = enc[:, :first_inputs.shape[1]] - first_logits = first_head(first_logits) - first_logits = first_logits.permute(0,2,1) - if second_inputs is not None: - second_logits = enc[:, -second_inputs.shape[1]:] - second_logits = second_head(second_logits) - second_logits = second_logits.permute(0,2,1) - return first_logits, second_logits - else: - return first_logits - - def forward(self, speech_conditioning_input, text_inputs, text_lengths, mel_codes, wav_lengths, types=None, text_first=True, raw_mels=None, return_attentions=False, - return_latent=False, clip_inputs=True): - """ - Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode - (actuated by `text_first`). - - speech_conditioning_input: MEL float tensor, (b,80,s) - text_inputs: long tensor, (b,t) - text_lengths: long tensor, (b,) - mel_inputs: long tensor, (b,m) - wav_lengths: long tensor, (b,) - raw_mels: MEL float tensor (b,80,s) - - If return_attentions is specified, only logits are returned. - If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned. - If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality. - """ - # Types are expressed by expanding the text embedding space. - if types is not None: - text_inputs = text_inputs * (1+types).unsqueeze(-1) - - if clip_inputs: - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. 
- max_text_len = text_lengths.max() - text_inputs = text_inputs[:, :max_text_len] - max_mel_len = wav_lengths.max() // self.mel_length_compression - mel_codes = mel_codes[:, :max_mel_len] - if raw_mels is not None: - raw_mels = raw_mels[:, :, :max_mel_len*4] - mel_codes = self.set_mel_padding(mel_codes, wav_lengths) - text_inputs = F.pad(text_inputs, (0,1), value=self.stop_text_token) - mel_codes = F.pad(mel_codes, (0,1), value=self.stop_mel_token) - - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len(speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - if self.average_conditioning_embeddings: - conds = conds.mean(dim=1).unsqueeze(1) - - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - mel_codes, mel_targets = self.build_aligned_inputs_and_targets(mel_codes, self.start_mel_token, self.stop_mel_token) - if raw_mels is not None: - mel_inp = F.pad(raw_mels, (0, 8)) - else: - mel_inp = mel_codes - mel_emb = self.mel_embedding(mel_inp) - mel_emb = mel_emb + self.mel_pos_embedding(mel_codes) - - if text_first: - text_logits, mel_logits = self.get_logits(conds, text_emb, self.text_head, mel_emb, self.mel_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return mel_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - else: - mel_logits, text_logits = self.get_logits(conds, mel_emb, self.mel_head, text_emb, self.text_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return text_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - - if return_attentions: - return mel_logits - loss_text = F.cross_entropy(text_logits, text_targets.long()) - loss_mel = F.cross_entropy(mel_logits, mel_targets.long()) - return loss_text.mean(), loss_mel.mean(), mel_logits - - def text_forward(self, speech_conditioning_input, text_inputs, text_lengths): - """ - Performs autoregressive modeling on only text. Still requires a speech_conditioning_input due to the way the - model inputs are formatted. Just provide any audio clip (arguably, zeros could be provided). - """ - assert self.max_text_tokens >= text_inputs.shape[1], f'{text_inputs.shape[1]}' - - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. 
- max_text_len = text_lengths.max() - text_inputs = F.pad(text_inputs[:, :max_text_len], (0,1), value=self.stop_text_token) - - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len(speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - if self.average_conditioning_embeddings: - conds = conds.mean(dim=1).unsqueeze(1) - - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) + self.text_solo_embedding - text_logits = self.get_logits(conds, text_emb, self.text_head) - loss_text = F.cross_entropy(text_logits, text_targets.long()) - return loss_text.mean() - - def speech_forward(self, speech_conditioning_input, mel_codes, wav_lengths, raw_mels=None): - """ - Performs autoregressive modeling on only speech data. - """ - assert self.max_mel_tokens >= mel_codes.shape[1], f'{mel_codes.shape[1]}' - - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. - max_mel_len = wav_lengths.max() // self.mel_length_compression - mel_codes = F.pad(mel_codes[:, :max_mel_len], (0,1), value=self.stop_mel_token) - mel_codes = self.set_mel_padding(mel_codes, wav_lengths) - if raw_mels is not None: - raw_mels = raw_mels[:, :, :max_mel_len*4] - - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len(speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - if self.average_conditioning_embeddings: - conds = conds.mean(dim=1).unsqueeze(1) - - mel_codes, mel_targets = self.build_aligned_inputs_and_targets(mel_codes, self.start_mel_token, self.stop_mel_token) - if raw_mels is not None: - mel_inp = F.pad(raw_mels, (0, 4)) - else: - mel_inp = mel_codes - mel_emb = self.mel_embedding(mel_inp) - mel_emb = mel_emb + self.mel_pos_embedding(mel_codes) + self.mel_solo_embedding - mel_logits = self.get_logits(conds, mel_emb, self.mel_head) - loss_mel = F.cross_entropy(mel_logits, mel_targets.long()) - return loss_mel.mean() - - def inference_speech(self, speech_conditioning_input, text_inputs, input_tokens=None, num_return_sequences=1, - max_generate_length=None, typical_sampling=False, typical_mass=.9, **hf_generate_kwargs): - seq_length = self.max_mel_tokens + self.max_text_tokens + 2 - if not hasattr(self, 'inference_model'): - # TODO: Decouple gpt_config from this inference model. 
- gpt_config = GPT2Config(vocab_size=self.max_mel_tokens, - n_positions=seq_length, - n_ctx=seq_length, - n_embd=self.model_dim, - n_layer=self.layers, - n_head=self.heads, - gradient_checkpointing=False, - use_cache=True) - self.inference_model = GPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head) - self.gpt.wte = self.mel_embedding - - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len(speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - if self.average_conditioning_embeddings: - conds = conds.mean(dim=1).unsqueeze(1) - - emb = torch.cat([conds, text_emb], dim=1) - self.inference_model.store_mel_emb(emb) - - fake_inputs = torch.full((emb.shape[0], conds.shape[1] + emb.shape[1],), fill_value=1, dtype=torch.long, - device=text_inputs.device) - fake_inputs[:, -1] = self.start_mel_token - trunc_index = fake_inputs.shape[1] - if input_tokens is None: - inputs = fake_inputs - else: - assert num_return_sequences % input_tokens.shape[0] == 0, "The number of return sequences must be divisible by the number of input sequences" - fake_inputs = fake_inputs.repeat(num_return_sequences, 1) - input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1) - inputs = torch.cat([fake_inputs, input_tokens], dim=1) - - logits_processor = LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList() - max_length = trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length - gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token, - max_length=max_length, logits_processor=logits_processor, - num_return_sequences=num_return_sequences, **hf_generate_kwargs) - return gen[:, trunc_index:] - - -if __name__ == '__main__': - gpt = UnifiedVoice(model_dim=256, heads=4, train_solo_embeddings=True, use_mel_codes_as_input=True, max_conditioning_inputs=4) - l = gpt(torch.randn(2, 3, 80, 800), - torch.randint(high=120, size=(2,120)), - torch.tensor([32, 120]), - torch.randint(high=8192, size=(2,250)), - torch.tensor([250*256,195*256])) - gpt.text_forward(torch.randn(2,80,800), torch.randint(high=50, size=(2,80)), torch.tensor([32, 80])) diff --git a/spaces/perilli/tortoise-tts-v2/tortoise/utils/diffusion.py b/spaces/perilli/tortoise-tts-v2/tortoise/utils/diffusion.py deleted file mode 100644 index e877ff22de75c407f067ff2a6280e912eebf7a84..0000000000000000000000000000000000000000 --- a/spaces/perilli/tortoise-tts-v2/tortoise/utils/diffusion.py +++ /dev/null @@ -1,1250 +0,0 @@ -""" -This is an almost carbon copy of gaussian_diffusion.py from OpenAI's ImprovedDiffusion repo, which itself: - -This code started out as a PyTorch port of Ho et al's diffusion models: -https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py - -Docstrings have been added, as well as DDIM sampling and a new collection of 
beta schedules. -""" - -import enum -import math - -import numpy as np -import torch -import torch as th -from tqdm import tqdm - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - Compute the KL divergence between two gaussians. - - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, th.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for th.exp(). - logvar1, logvar2 = [ - x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + th.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * th.exp(-logvar2) - ) - - -def approx_standard_normal_cdf(x): - """ - A fast approximation of the cumulative distribution function of the - standard normal. - """ - return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3)))) - - -def discretized_gaussian_log_likelihood(x, *, means, log_scales): - """ - Compute the log-likelihood of a Gaussian distribution discretizing to a - given image. - - :param x: the target images. It is assumed that this was uint8 values, - rescaled to the range [-1, 1]. - :param means: the Gaussian mean Tensor. - :param log_scales: the Gaussian log stddev Tensor. - :return: a tensor like x of log probabilities (in nats). - """ - assert x.shape == means.shape == log_scales.shape - centered_x = x - means - inv_stdv = th.exp(-log_scales) - plus_in = inv_stdv * (centered_x + 1.0 / 255.0) - cdf_plus = approx_standard_normal_cdf(plus_in) - min_in = inv_stdv * (centered_x - 1.0 / 255.0) - cdf_min = approx_standard_normal_cdf(min_in) - log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12)) - log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12)) - cdf_delta = cdf_plus - cdf_min - log_probs = th.where( - x < -0.999, - log_cdf_plus, - th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))), - ) - assert log_probs.shape == x.shape - return log_probs - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - beta_start = scale * 0.0001 - beta_end = scale * 0.02 - return np.linspace( - beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64 - ) - elif schedule_name == "cosine": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. 
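- Concretely, beta_i = min(1 - alpha_bar((i + 1) / T) / alpha_bar(i / T), max_beta) for i = 0, ..., T - 1, which is exactly what the loop below computes.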
- - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -class ModelMeanType(enum.Enum): - """ - Which type of output the model predicts. - """ - - PREVIOUS_X = 'previous_x' # the model predicts x_{t-1} - START_X = 'start_x' # the model predicts x_0 - EPSILON = 'epsilon' # the model predicts epsilon - - -class ModelVarType(enum.Enum): - """ - What is used as the model's output variance. - - The LEARNED_RANGE option has been added to allow the model to predict - values between FIXED_SMALL and FIXED_LARGE, making its job easier. - """ - - LEARNED = 'learned' - FIXED_SMALL = 'fixed_small' - FIXED_LARGE = 'fixed_large' - LEARNED_RANGE = 'learned_range' - - -class LossType(enum.Enum): - MSE = 'mse' # use raw MSE loss (and KL when learning variances) - RESCALED_MSE = 'rescaled_mse' # use raw MSE loss (with RESCALED_KL when learning variances) - KL = 'kl' # use the variational lower-bound - RESCALED_KL = 'rescaled_kl' # like KL, but rescale to estimate the full VLB - - def is_vb(self): - return self == LossType.KL or self == LossType.RESCALED_KL - - -class GaussianDiffusion: - """ - Utilities for training and sampling diffusion models. - - Ported directly from here, and then adapted over time to further experimentation. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 - - :param betas: a 1-D numpy array of betas for each diffusion timestep, - starting at T and going to 1. - :param model_mean_type: a ModelMeanType determining what the model outputs. - :param model_var_type: a ModelVarType determining how variance is output. - :param loss_type: a LossType determining the loss function to use. - :param rescale_timesteps: if True, pass floating point timesteps into the - model so that they are always scaled like in the - original paper (0 to 1000). - """ - - def __init__( - self, - *, - betas, - model_mean_type, - model_var_type, - loss_type, - rescale_timesteps=False, - conditioning_free=False, - conditioning_free_k=1, - ramp_conditioning_free=True, - ): - self.model_mean_type = ModelMeanType(model_mean_type) - self.model_var_type = ModelVarType(model_var_type) - self.loss_type = LossType(loss_type) - self.rescale_timesteps = rescale_timesteps - self.conditioning_free = conditioning_free - self.conditioning_free_k = conditioning_free_k - self.ramp_conditioning_free = ramp_conditioning_free - - # Use float64 for accuracy. 
- betas = np.array(betas, dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - # log calculation clipped because the posterior variance is 0 at the - # beginning of the diffusion chain. - self.posterior_log_variance_clipped = np.log( - np.append(self.posterior_variance[1], self.posterior_variance[1:]) - ) - self.posterior_mean_coef1 = ( - betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - self.posterior_mean_coef2 = ( - (1.0 - self.alphas_cumprod_prev) - * np.sqrt(alphas) - / (1.0 - self.alphas_cumprod) - ) - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - ) - variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract_into_tensor( - self.log_one_minus_alphas_cumprod, t, x_start.shape - ) - return mean, variance, log_variance - - def q_sample(self, x_start, t, noise=None): - """ - Diffuse the data for a given number of diffusion steps. - - In other words, sample from q(x_t | x_0). - - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. 
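- In closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise, where alpha_bar_t is the cumulative product of (1 - beta_s) up to step t; the implementation below reads these factors from sqrt_alphas_cumprod and sqrt_one_minus_alphas_cumprod.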
- """ - if noise is None: - noise = th.randn_like(x_start) - assert noise.shape == x_start.shape - return ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) - * noise - ) - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - - q(x_{t-1} | x_t, x_0) - - """ - assert x_start.shape == x_t.shape - posterior_mean = ( - _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x_t.shape - ) - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance( - self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None - ): - """ - Apply the model to get p(x_{t-1} | x_t), as well as a prediction of - the initial x, x_0. - - :param model: the model, which takes a signal and a batch of timesteps - as input. - :param x: the [N x C x ...] tensor at time t. - :param t: a 1-D Tensor of timesteps. - :param clip_denoised: if True, clip the denoised signal into [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. Applies before - clip_denoised. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict with the following keys: - - 'mean': the model mean output. - - 'variance': the model variance output. - - 'log_variance': the log of 'variance'. - - 'pred_xstart': the prediction for x_0. - """ - if model_kwargs is None: - model_kwargs = {} - - B, C = x.shape[:2] - assert t.shape == (B,) - model_output = model(x, self._scale_timesteps(t), **model_kwargs) - if self.conditioning_free: - model_output_no_conditioning = model(x, self._scale_timesteps(t), conditioning_free=True, **model_kwargs) - - if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]: - assert model_output.shape == (B, C * 2, *x.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - if self.conditioning_free: - model_output_no_conditioning, _ = th.split(model_output_no_conditioning, C, dim=1) - if self.model_var_type == ModelVarType.LEARNED: - model_log_variance = model_var_values - model_variance = th.exp(model_log_variance) - else: - min_log = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x.shape - ) - max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) - # The model_var_values is [-1, 1] for [min_var, max_var]. - frac = (model_var_values + 1) / 2 - model_log_variance = frac * max_log + (1 - frac) * min_log - model_variance = th.exp(model_log_variance) - else: - model_variance, model_log_variance = { - # for fixedlarge, we set the initial (log-)variance like so - # to get a better decoder log likelihood. 
- ModelVarType.FIXED_LARGE: ( - np.append(self.posterior_variance[1], self.betas[1:]), - np.log(np.append(self.posterior_variance[1], self.betas[1:])), - ), - ModelVarType.FIXED_SMALL: ( - self.posterior_variance, - self.posterior_log_variance_clipped, - ), - }[self.model_var_type] - model_variance = _extract_into_tensor(model_variance, t, x.shape) - model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape) - - if self.conditioning_free: - if self.ramp_conditioning_free: - assert t.shape[0] == 1 # This should only be used in inference. - cfk = self.conditioning_free_k * (1 - self._scale_timesteps(t)[0].item() / self.num_timesteps) - else: - cfk = self.conditioning_free_k - model_output = (1 + cfk) * model_output - cfk * model_output_no_conditioning - - def process_xstart(x): - if denoised_fn is not None: - x = denoised_fn(x) - if clip_denoised: - return x.clamp(-1, 1) - return x - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - pred_xstart = process_xstart( - self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output) - ) - model_mean = model_output - elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]: - if self.model_mean_type == ModelMeanType.START_X: - pred_xstart = process_xstart(model_output) - else: - pred_xstart = process_xstart( - self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output) - ) - model_mean, _, _ = self.q_posterior_mean_variance( - x_start=pred_xstart, x_t=x, t=t - ) - else: - raise NotImplementedError(self.model_mean_type) - - assert ( - model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape - ) - return { - "mean": model_mean, - "variance": model_variance, - "log_variance": model_log_variance, - "pred_xstart": pred_xstart, - } - - def _predict_xstart_from_eps(self, x_t, t, eps): - assert x_t.shape == eps.shape - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps - ) - - def _predict_xstart_from_xprev(self, x_t, t, xprev): - assert x_t.shape == xprev.shape - return ( # (xprev - coef2*x_t) / coef1 - _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev - - _extract_into_tensor( - self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape - ) - * x_t - ) - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - pred_xstart - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _scale_timesteps(self, t): - if self.rescale_timesteps: - return t.float() * (1000.0 / self.num_timesteps) - return t - - def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute the mean for the previous step, given a function cond_fn that - computes the gradient of a conditional log probability with respect to - x. In particular, cond_fn computes grad(log(p(y|x))), and we want to - condition on y. - - This uses the conditioning strategy from Sohl-Dickstein et al. (2015). - """ - gradient = cond_fn(x, self._scale_timesteps(t), **model_kwargs) - new_mean = ( - p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float() - ) - return new_mean - - def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute what the p_mean_variance output would have been, should the - model's score function be conditioned by cond_fn. - - See condition_mean() for details on cond_fn. 
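The `conditioning_free` branch above is classifier-free guidance: the denoiser is evaluated once with conditioning and once without, and the two outputs are blended as `(1 + k) * conditional - k * unconditional`. A small sketch of just that blend, with NumPy arrays standing in for the two model outputs:

```python
import numpy as np

def classifier_free_mix(cond_out: np.ndarray,
                        uncond_out: np.ndarray,
                        cfk: float) -> np.ndarray:
    """Blend conditional / unconditional outputs as in the code above:
    (1 + k) * conditional - k * unconditional; k = 0 disables guidance."""
    return (1.0 + cfk) * cond_out - cfk * uncond_out

cond = np.array([0.2, -0.1, 0.5])     # toy conditional prediction
uncond = np.array([0.0, 0.0, 0.1])    # toy unconditional prediction
print(classifier_free_mix(cond, uncond, cfk=2.0))  # amplifies the conditional direction
```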
- - Unlike condition_mean(), this instead uses the conditioning strategy - from Song et al (2020). - """ - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - - eps = self._predict_eps_from_xstart(x, t, p_mean_var["pred_xstart"]) - eps = eps - (1 - alpha_bar).sqrt() * cond_fn( - x, self._scale_timesteps(t), **model_kwargs - ) - - out = p_mean_var.copy() - out["pred_xstart"] = self._predict_xstart_from_eps(x, t, eps) - out["mean"], _, _ = self.q_posterior_mean_variance( - x_start=out["pred_xstart"], x_t=x, t=t - ) - return out - - def p_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - ): - """ - Sample x_{t-1} from the model at the given timestep. - - :param model: the model to sample from. - :param x: the current tensor at x_{t-1}. - :param t: the value of t, starting at 0 for the first diffusion step. - :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict containing the following keys: - - 'sample': a random sample from the model. - - 'pred_xstart': a prediction of x_0. - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - noise = th.randn_like(x) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - if cond_fn is not None: - out["mean"] = self.condition_mean( - cond_fn, out, x, t, model_kwargs=model_kwargs - ) - sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def p_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model. - - :param model: the model module. - :param shape: the shape of the samples, (N, C, H, W). - :param noise: if specified, the noise from the encoder to sample. - Should be of the same shape as `shape`. - :param clip_denoised: if True, clip x_start predictions to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param device: if specified, the device to create the samples on. - If not specified, use a model parameter's device. - :param progress: if True, show a tqdm progress bar. - :return: a non-differentiable batch of samples. 
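`p_sample_loop` above runs ancestral sampling: start from Gaussian noise and repeatedly draw x_{t-1} from the model's predicted mean and log-variance, adding no noise at the final step. A condensed, framework-free sketch of that loop; `predict_mean_and_logvar` is a hypothetical stand-in for the wrapped model plus `p_mean_variance`:

```python
import numpy as np

def sample_loop(predict_mean_and_logvar, shape, num_timesteps, rng=np.random):
    """Ancestral sampling: x_T ~ N(0, I), then repeatedly sample
    x_{t-1} ~ N(mean(x_t, t), exp(logvar(x_t, t))); no noise is added at t == 0."""
    img = rng.standard_normal(shape)
    for t in reversed(range(num_timesteps)):
        mean, log_variance = predict_mean_and_logvar(img, t)
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        img = mean + np.exp(0.5 * log_variance) * noise
    return img

# Dummy denoiser that just shrinks the sample a little at every step.
toy = sample_loop(lambda x, t: (0.9 * x, np.full_like(x, -4.0)), (2, 3), 50)
print(toy.shape)  # (2, 3)
```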
- """ - final = None - for sample in self.p_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - ): - final = sample - return final["sample"] - - def p_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model and yield intermediate samples from - each timestep of diffusion. - - Arguments are the same as p_sample_loop(). - Returns a generator over dicts, where each dict is the return value of - p_sample(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - for i in tqdm(indices, disable=not progress): - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.p_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - ) - yield out - img = out["sample"] - - def ddim_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t-1} from the model using DDIM. - - Same usage as p_sample(). - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) - - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) - sigma = ( - eta - * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) - * th.sqrt(1 - alpha_bar / alpha_bar_prev) - ) - # Equation 12. - noise = th.randn_like(x) - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_prev) - + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps - ) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - sample = mean_pred + nonzero_mask * sigma * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def ddim_reverse_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t+1} from the model using DDIM reverse ODE. - """ - assert eta == 0.0, "Reverse ODE only for deterministic path" - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - - out["pred_xstart"] - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) - alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) - - # Equation 12. 
reversed - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_next) - + th.sqrt(1 - alpha_bar_next) * eps - ) - - return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} - - def ddim_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Generate samples from the model using DDIM. - - Same usage as p_sample_loop(). - """ - final = None - for sample in self.ddim_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - eta=eta, - ): - final = sample - return final["sample"] - - def ddim_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Use DDIM to sample from the model and yield intermediate samples from - each timestep of DDIM. - - Same usage as p_sample_loop_progressive(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices, disable=not progress) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.ddim_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - eta=eta, - ) - yield out - img = out["sample"] - - def _vb_terms_bpd( - self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None - ): - """ - Get a term for the variational lower-bound. - - The resulting units are bits (rather than nats, as one might expect). - This allows for comparison to other papers. - - :return: a dict with the following keys: - - 'output': a shape [N] tensor of NLLs or KLs. - - 'pred_xstart': the x_0 predictions. - """ - true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance( - x_start=x_start, x_t=x_t, t=t - ) - out = self.p_mean_variance( - model, x_t, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs - ) - kl = normal_kl( - true_mean, true_log_variance_clipped, out["mean"], out["log_variance"] - ) - kl = mean_flat(kl) / np.log(2.0) - - decoder_nll = -discretized_gaussian_log_likelihood( - x_start, means=out["mean"], log_scales=0.5 * out["log_variance"] - ) - assert decoder_nll.shape == x_start.shape - decoder_nll = mean_flat(decoder_nll) / np.log(2.0) - - # At the first timestep return the decoder NLL, - # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t)) - output = th.where((t == 0), decoder_nll, kl) - return {"output": output, "pred_xstart": out["pred_xstart"]} - - def training_losses(self, model, x_start, t, model_kwargs=None, noise=None): - """ - Compute training losses for a single timestep. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param t: a batch of timestep indices. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param noise: if specified, the specific Gaussian noise to try to remove. 
- :return: a dict with the key "loss" containing a tensor of shape [N]. - Some mean or variance settings may also have other keys. - """ - if model_kwargs is None: - model_kwargs = {} - if noise is None: - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start, t, noise=noise) - - terms = {} - - if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL: - # TODO: support multiple model outputs for this mode. - terms["loss"] = self._vb_terms_bpd( - model=model, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - model_kwargs=model_kwargs, - )["output"] - if self.loss_type == LossType.RESCALED_KL: - terms["loss"] *= self.num_timesteps - elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE: - model_outputs = model(x_t, self._scale_timesteps(t), **model_kwargs) - if isinstance(model_outputs, tuple): - model_output = model_outputs[0] - terms['extra_outputs'] = model_outputs[1:] - else: - model_output = model_outputs - - if self.model_var_type in [ - ModelVarType.LEARNED, - ModelVarType.LEARNED_RANGE, - ]: - B, C = x_t.shape[:2] - assert model_output.shape == (B, C * 2, *x_t.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - # Learn the variance using the variational bound, but don't let - # it affect our mean prediction. - frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) - terms["vb"] = self._vb_terms_bpd( - model=lambda *args, r=frozen_out: r, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - )["output"] - if self.loss_type == LossType.RESCALED_MSE: - # Divide by 1000 for equivalence with initial implementation. - # Without a factor of 1/1000, the VB term hurts the MSE term. - terms["vb"] *= self.num_timesteps / 1000.0 - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - target = self.q_posterior_mean_variance( - x_start=x_start, x_t=x_t, t=t - )[0] - x_start_pred = torch.zeros(x_start) # Not supported. - elif self.model_mean_type == ModelMeanType.START_X: - target = x_start - x_start_pred = model_output - elif self.model_mean_type == ModelMeanType.EPSILON: - target = noise - x_start_pred = self._predict_xstart_from_eps(x_t, t, model_output) - else: - raise NotImplementedError(self.model_mean_type) - assert model_output.shape == target.shape == x_start.shape - terms["mse"] = mean_flat((target - model_output) ** 2) - terms["x_start_predicted"] = x_start_pred - if "vb" in terms: - terms["loss"] = terms["mse"] + terms["vb"] - else: - terms["loss"] = terms["mse"] - else: - raise NotImplementedError(self.loss_type) - - return terms - - def autoregressive_training_losses(self, model, x_start, t, model_output_keys, gd_out_key, model_kwargs=None, noise=None): - """ - Compute training losses for a single timestep. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param t: a batch of timestep indices. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param noise: if specified, the specific Gaussian noise to try to remove. - :return: a dict with the key "loss" containing a tensor of shape [N]. - Some mean or variance settings may also have other keys. 
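`training_losses` above builds the standard objective: noise a clean batch with `q_sample`, run the denoiser, and (for `ModelMeanType.EPSILON`) take the MSE against the injected noise, optionally adding a variational-bound term when the variance is learned. A stripped-down sketch of the epsilon-MSE part only, with a dummy model standing in for the network:

```python
import numpy as np

T = 1000
alphas_cumprod = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def epsilon_mse_loss(model, x_start, t, rng=np.random):
    """L_simple: noise x_start to step t, then MSE between true and predicted noise."""
    noise = rng.standard_normal(x_start.shape)
    x_t = (np.sqrt(alphas_cumprod[t]) * x_start
           + np.sqrt(1.0 - alphas_cumprod[t]) * noise)
    predicted = model(x_t, t)                     # stand-in denoiser
    return np.mean((noise - predicted) ** 2)

x0 = np.random.randn(8, 16)
print(epsilon_mse_loss(lambda x, t: np.zeros_like(x), x0, t=100))
```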
- """ - if model_kwargs is None: - model_kwargs = {} - if noise is None: - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start, t, noise=noise) - terms = {} - if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL: - assert False # not currently supported for this type of diffusion. - elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE: - model_outputs = model(x_t, x_start, self._scale_timesteps(t), **model_kwargs) - terms.update({k: o for k, o in zip(model_output_keys, model_outputs)}) - model_output = terms[gd_out_key] - if self.model_var_type in [ - ModelVarType.LEARNED, - ModelVarType.LEARNED_RANGE, - ]: - B, C = x_t.shape[:2] - assert model_output.shape == (B, C, 2, *x_t.shape[2:]) - model_output, model_var_values = model_output[:, :, 0], model_output[:, :, 1] - # Learn the variance using the variational bound, but don't let - # it affect our mean prediction. - frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) - terms["vb"] = self._vb_terms_bpd( - model=lambda *args, r=frozen_out: r, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - )["output"] - if self.loss_type == LossType.RESCALED_MSE: - # Divide by 1000 for equivalence with initial implementation. - # Without a factor of 1/1000, the VB term hurts the MSE term. - terms["vb"] *= self.num_timesteps / 1000.0 - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - target = self.q_posterior_mean_variance( - x_start=x_start, x_t=x_t, t=t - )[0] - x_start_pred = torch.zeros(x_start) # Not supported. - elif self.model_mean_type == ModelMeanType.START_X: - target = x_start - x_start_pred = model_output - elif self.model_mean_type == ModelMeanType.EPSILON: - target = noise - x_start_pred = self._predict_xstart_from_eps(x_t, t, model_output) - else: - raise NotImplementedError(self.model_mean_type) - assert model_output.shape == target.shape == x_start.shape - terms["mse"] = mean_flat((target - model_output) ** 2) - terms["x_start_predicted"] = x_start_pred - if "vb" in terms: - terms["loss"] = terms["mse"] + terms["vb"] - else: - terms["loss"] = terms["mse"] - else: - raise NotImplementedError(self.loss_type) - - return terms - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - - This term can't be optimized, as it only depends on the encoder. - - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl( - mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0 - ) - return mean_flat(kl_prior) / np.log(2.0) - - def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None): - """ - Compute the entire variational lower-bound, measured in bits-per-dim, - as well as other related quantities. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param clip_denoised: if True, clip denoised samples. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - - :return: a dict containing the following keys: - - total_bpd: the total variational lower-bound, per batch element. - - prior_bpd: the prior term in the lower-bound. - - vb: an [N x T] tensor of terms in the lower-bound. 
- - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep. - - mse: an [N x T] tensor of epsilon MSEs for each timestep. - """ - device = x_start.device - batch_size = x_start.shape[0] - - vb = [] - xstart_mse = [] - mse = [] - for t in list(range(self.num_timesteps))[::-1]: - t_batch = th.tensor([t] * batch_size, device=device) - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise) - # Calculate VLB term at the current timestep - with th.no_grad(): - out = self._vb_terms_bpd( - model, - x_start=x_start, - x_t=x_t, - t=t_batch, - clip_denoised=clip_denoised, - model_kwargs=model_kwargs, - ) - vb.append(out["output"]) - xstart_mse.append(mean_flat((out["pred_xstart"] - x_start) ** 2)) - eps = self._predict_eps_from_xstart(x_t, t_batch, out["pred_xstart"]) - mse.append(mean_flat((eps - noise) ** 2)) - - vb = th.stack(vb, dim=1) - xstart_mse = th.stack(xstart_mse, dim=1) - mse = th.stack(mse, dim=1) - - prior_bpd = self._prior_bpd(x_start) - total_bpd = vb.sum(dim=1) + prior_bpd - return { - "total_bpd": total_bpd, - "prior_bpd": prior_bpd, - "vb": vb, - "xstart_mse": xstart_mse, - "mse": mse, - } - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - beta_start = scale * 0.0001 - beta_end = scale * 0.02 - return np.linspace( - beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64 - ) - elif schedule_name == "cosine": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -class SpacedDiffusion(GaussianDiffusion): - """ - A diffusion process which can skip steps in a base diffusion process. - - :param use_timesteps: a collection (sequence or set) of timesteps from the - original diffusion process to retain. - :param kwargs: the kwargs to create the base diffusion process. 
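`get_named_beta_schedule` above offers a scaled linear schedule and a cosine schedule whose betas are derived from a target alpha-bar curve via `betas_for_alpha_bar` (defined earlier in the deleted file and not shown here). A small comparison sketch; `cosine_betas` is a local re-statement of that helper under the usual `min(1 - abar(t2)/abar(t1), max_beta)` rule, so treat it as an approximation rather than a copy of the original:

```python
import math
import numpy as np

def linear_betas(num_steps: int) -> np.ndarray:
    """Scaled linear schedule, matching the 'linear' branch above."""
    scale = 1000 / num_steps
    return np.linspace(scale * 0.0001, scale * 0.02, num_steps, dtype=np.float64)

def cosine_betas(num_steps: int, max_beta: float = 0.999) -> np.ndarray:
    """Local re-implementation of betas_for_alpha_bar: derive each beta from the
    ratio of consecutive alpha-bar values of the cosine curve used above."""
    alpha_bar = lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
    betas = []
    for i in range(num_steps):
        t1, t2 = i / num_steps, (i + 1) / num_steps
        betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
    return np.array(betas)

print(linear_betas(1000)[:3])
print(cosine_betas(1000)[:3])
```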
- """ - - def __init__(self, use_timesteps, **kwargs): - self.use_timesteps = set(use_timesteps) - self.timestep_map = [] - self.original_num_steps = len(kwargs["betas"]) - - base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa - last_alpha_cumprod = 1.0 - new_betas = [] - for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod): - if i in self.use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - self.timestep_map.append(i) - kwargs["betas"] = np.array(new_betas) - super().__init__(**kwargs) - - def p_mean_variance( - self, model, *args, **kwargs - ): # pylint: disable=signature-differs - return super().p_mean_variance(self._wrap_model(model), *args, **kwargs) - - def training_losses( - self, model, *args, **kwargs - ): # pylint: disable=signature-differs - return super().training_losses(self._wrap_model(model), *args, **kwargs) - - def autoregressive_training_losses( - self, model, *args, **kwargs - ): # pylint: disable=signature-differs - return super().autoregressive_training_losses(self._wrap_model(model, True), *args, **kwargs) - - def condition_mean(self, cond_fn, *args, **kwargs): - return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs) - - def condition_score(self, cond_fn, *args, **kwargs): - return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs) - - def _wrap_model(self, model, autoregressive=False): - if isinstance(model, _WrappedModel) or isinstance(model, _WrappedAutoregressiveModel): - return model - mod = _WrappedAutoregressiveModel if autoregressive else _WrappedModel - return mod( - model, self.timestep_map, self.rescale_timesteps, self.original_num_steps - ) - - def _scale_timesteps(self, t): - # Scaling is done by the wrapped model. - return t - - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. 
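The `space_timesteps` docstring above ends with a concrete example: 300 steps split into sections that keep 10, 15 and 20 timesteps. A simplified re-statement of that even-stride selection (it ignores the remainder handling and the "ddimN" string form of the real function):

```python
def strided_steps(num_timesteps: int, section_counts: list[int]) -> set[int]:
    """Simplified version of space_timesteps above: split the process into
    equal sections and keep an evenly strided subset of each section."""
    size = num_timesteps // len(section_counts)
    steps: set[int] = set()
    for i, count in enumerate(section_counts):
        start = i * size
        stride = 1 if count <= 1 else (size - 1) / (count - 1)
        steps.update(start + round(j * stride) for j in range(count))
    return steps

kept = strided_steps(300, [10, 15, 20])
print(len(kept))          # 45 retained timesteps out of 300
print(sorted(kept)[:5])   # first few retained indices
```

`SpacedDiffusion` then rebuilds a shorter beta sequence from the retained alpha-bar values, which is why sampling with a reduced step count still matches the original process.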
- """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim") :]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - - -class _WrappedModel: - def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps): - self.model = model - self.timestep_map = timestep_map - self.rescale_timesteps = rescale_timesteps - self.original_num_steps = original_num_steps - - def __call__(self, x, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - if self.rescale_timesteps: - new_ts = new_ts.float() * (1000.0 / self.original_num_steps) - return self.model(x, new_ts, **kwargs) - - -class _WrappedAutoregressiveModel: - def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps): - self.model = model - self.timestep_map = timestep_map - self.rescale_timesteps = rescale_timesteps - self.original_num_steps = original_num_steps - - def __call__(self, x, x0, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - if self.rescale_timesteps: - new_ts = new_ts.float() * (1000.0 / self.original_num_steps) - return self.model(x, x0, new_ts, **kwargs) - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py deleted file mode 100644 index 91368dda78aad590837aa12023dee67e224709ba..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py +++ /dev/null @@ -1,289 +0,0 @@ -import logging -from datetime import datetime -from logging import Handler, LogRecord -from pathlib import Path -from types import ModuleType -from typing import ClassVar, Iterable, List, Optional, Type, Union - -from pip._vendor.rich._null_file import NullFile - -from . 
import get_console -from ._log_render import FormatTimeCallable, LogRender -from .console import Console, ConsoleRenderable -from .highlighter import Highlighter, ReprHighlighter -from .text import Text -from .traceback import Traceback - - -class RichHandler(Handler): - """A logging handler that renders output with Rich. The time / level / message and file are displayed in columns. - The level is color coded, and the message is syntax highlighted. - - Note: - Be careful when enabling console markup in log messages if you have configured logging for libraries not - under your control. If a dependency writes messages containing square brackets, it may not produce the intended output. - - Args: - level (Union[int, str], optional): Log level. Defaults to logging.NOTSET. - console (:class:`~rich.console.Console`, optional): Optional console instance to write logs. - Default will use a global console instance writing to stdout. - show_time (bool, optional): Show a column for the time. Defaults to True. - omit_repeated_times (bool, optional): Omit repetition of the same time. Defaults to True. - show_level (bool, optional): Show a column for the level. Defaults to True. - show_path (bool, optional): Show the path to the original log call. Defaults to True. - enable_link_path (bool, optional): Enable terminal link of path column to file. Defaults to True. - highlighter (Highlighter, optional): Highlighter to style log messages, or None to use ReprHighlighter. Defaults to None. - markup (bool, optional): Enable console markup in log messages. Defaults to False. - rich_tracebacks (bool, optional): Enable rich tracebacks with syntax highlighting and formatting. Defaults to False. - tracebacks_width (Optional[int], optional): Number of characters used to render tracebacks, or None for full width. Defaults to None. - tracebacks_extra_lines (int, optional): Additional lines of code to render tracebacks, or None for full width. Defaults to None. - tracebacks_theme (str, optional): Override pygments theme used in traceback. - tracebacks_word_wrap (bool, optional): Enable word wrapping of long tracebacks lines. Defaults to True. - tracebacks_show_locals (bool, optional): Enable display of locals in tracebacks. Defaults to False. - tracebacks_suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback. - locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%x %X] ". - keywords (List[str], optional): List of words to highlight instead of ``RichHandler.KEYWORDS``. 
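A brief configuration sketch for the handler documented above, using only arguments listed in the docstring. It assumes the standalone `rich` package is importable; inside this vendored tree the same class lives at `pip._vendor.rich.logging`:

```python
import logging

# Assumption: standalone `rich` is installed; in this vendored tree import
# from pip._vendor.rich.logging instead.
from rich.logging import RichHandler

logging.basicConfig(
    level="INFO",
    format="%(message)s",          # RichHandler renders time/level/path itself
    datefmt="[%X]",
    handlers=[RichHandler(rich_tracebacks=True, markup=False, show_path=True)],
)
logging.getLogger("demo").info("GET /index.html 200 1298")  # 'GET' matches KEYWORDS
```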
- """ - - KEYWORDS: ClassVar[Optional[List[str]]] = [ - "GET", - "POST", - "HEAD", - "PUT", - "DELETE", - "OPTIONS", - "TRACE", - "PATCH", - ] - HIGHLIGHTER_CLASS: ClassVar[Type[Highlighter]] = ReprHighlighter - - def __init__( - self, - level: Union[int, str] = logging.NOTSET, - console: Optional[Console] = None, - *, - show_time: bool = True, - omit_repeated_times: bool = True, - show_level: bool = True, - show_path: bool = True, - enable_link_path: bool = True, - highlighter: Optional[Highlighter] = None, - markup: bool = False, - rich_tracebacks: bool = False, - tracebacks_width: Optional[int] = None, - tracebacks_extra_lines: int = 3, - tracebacks_theme: Optional[str] = None, - tracebacks_word_wrap: bool = True, - tracebacks_show_locals: bool = False, - tracebacks_suppress: Iterable[Union[str, ModuleType]] = (), - locals_max_length: int = 10, - locals_max_string: int = 80, - log_time_format: Union[str, FormatTimeCallable] = "[%x %X]", - keywords: Optional[List[str]] = None, - ) -> None: - super().__init__(level=level) - self.console = console or get_console() - self.highlighter = highlighter or self.HIGHLIGHTER_CLASS() - self._log_render = LogRender( - show_time=show_time, - show_level=show_level, - show_path=show_path, - time_format=log_time_format, - omit_repeated_times=omit_repeated_times, - level_width=None, - ) - self.enable_link_path = enable_link_path - self.markup = markup - self.rich_tracebacks = rich_tracebacks - self.tracebacks_width = tracebacks_width - self.tracebacks_extra_lines = tracebacks_extra_lines - self.tracebacks_theme = tracebacks_theme - self.tracebacks_word_wrap = tracebacks_word_wrap - self.tracebacks_show_locals = tracebacks_show_locals - self.tracebacks_suppress = tracebacks_suppress - self.locals_max_length = locals_max_length - self.locals_max_string = locals_max_string - self.keywords = keywords - - def get_level_text(self, record: LogRecord) -> Text: - """Get the level name from the record. - - Args: - record (LogRecord): LogRecord instance. - - Returns: - Text: A tuple of the style and level name. 
- """ - level_name = record.levelname - level_text = Text.styled( - level_name.ljust(8), f"logging.level.{level_name.lower()}" - ) - return level_text - - def emit(self, record: LogRecord) -> None: - """Invoked by logging.""" - message = self.format(record) - traceback = None - if ( - self.rich_tracebacks - and record.exc_info - and record.exc_info != (None, None, None) - ): - exc_type, exc_value, exc_traceback = record.exc_info - assert exc_type is not None - assert exc_value is not None - traceback = Traceback.from_exception( - exc_type, - exc_value, - exc_traceback, - width=self.tracebacks_width, - extra_lines=self.tracebacks_extra_lines, - theme=self.tracebacks_theme, - word_wrap=self.tracebacks_word_wrap, - show_locals=self.tracebacks_show_locals, - locals_max_length=self.locals_max_length, - locals_max_string=self.locals_max_string, - suppress=self.tracebacks_suppress, - ) - message = record.getMessage() - if self.formatter: - record.message = record.getMessage() - formatter = self.formatter - if hasattr(formatter, "usesTime") and formatter.usesTime(): - record.asctime = formatter.formatTime(record, formatter.datefmt) - message = formatter.formatMessage(record) - - message_renderable = self.render_message(record, message) - log_renderable = self.render( - record=record, traceback=traceback, message_renderable=message_renderable - ) - if isinstance(self.console.file, NullFile): - # Handles pythonw, where stdout/stderr are null, and we return NullFile - # instance from Console.file. In this case, we still want to make a log record - # even though we won't be writing anything to a file. - self.handleError(record) - else: - try: - self.console.print(log_renderable) - except Exception: - self.handleError(record) - - def render_message(self, record: LogRecord, message: str) -> "ConsoleRenderable": - """Render message text in to Text. - - Args: - record (LogRecord): logging Record. - message (str): String containing log message. - - Returns: - ConsoleRenderable: Renderable to display log message. - """ - use_markup = getattr(record, "markup", self.markup) - message_text = Text.from_markup(message) if use_markup else Text(message) - - highlighter = getattr(record, "highlighter", self.highlighter) - if highlighter: - message_text = highlighter(message_text) - - if self.keywords is None: - self.keywords = self.KEYWORDS - - if self.keywords: - message_text.highlight_words(self.keywords, "logging.keyword") - - return message_text - - def render( - self, - *, - record: LogRecord, - traceback: Optional[Traceback], - message_renderable: "ConsoleRenderable", - ) -> "ConsoleRenderable": - """Render log for display. - - Args: - record (LogRecord): logging Record. - traceback (Optional[Traceback]): Traceback instance or None for no Traceback. - message_renderable (ConsoleRenderable): Renderable (typically Text) containing log message contents. - - Returns: - ConsoleRenderable: Renderable to display log. 
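`render_message` above looks up per-record overrides (`markup`, `highlighter`) with `getattr` on the `LogRecord`, which is what makes the `extra=` pattern in the demo block at the bottom of this file work. A short sketch of toggling markup for a single record (again assuming a standalone `rich` install):

```python
import logging

from rich.logging import RichHandler  # assumption: standalone rich is installed

log = logging.getLogger("demo")
log.addHandler(RichHandler())
log.setLevel(logging.INFO)

# render_message() reads these attributes off the LogRecord, so `extra`
# can flip markup (or swap the highlighter) for one message only.
log.info("[bold red]markup enabled just for this record[/]", extra={"markup": True})
log.info("a plain, non-markup message")
```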
- """ - path = Path(record.pathname).name - level = self.get_level_text(record) - time_format = None if self.formatter is None else self.formatter.datefmt - log_time = datetime.fromtimestamp(record.created) - - log_renderable = self._log_render( - self.console, - [message_renderable] if not traceback else [message_renderable, traceback], - log_time=log_time, - time_format=time_format, - level=level, - path=path, - line_no=record.lineno, - link_path=record.pathname if self.enable_link_path else None, - ) - return log_renderable - - -if __name__ == "__main__": # pragma: no cover - from time import sleep - - FORMAT = "%(message)s" - # FORMAT = "%(asctime)-15s - %(levelname)s - %(message)s" - logging.basicConfig( - level="NOTSET", - format=FORMAT, - datefmt="[%X]", - handlers=[RichHandler(rich_tracebacks=True, tracebacks_show_locals=True)], - ) - log = logging.getLogger("rich") - - log.info("Server starting...") - log.info("Listening on http://127.0.0.1:8080") - sleep(1) - - log.info("GET /index.html 200 1298") - log.info("GET /imgs/backgrounds/back1.jpg 200 54386") - log.info("GET /css/styles.css 200 54386") - log.warning("GET /favicon.ico 404 242") - sleep(1) - - log.debug( - "JSONRPC request\n--> %r\n<-- %r", - { - "version": "1.1", - "method": "confirmFruitPurchase", - "params": [["apple", "orange", "mangoes", "pomelo"], 1.123], - "id": "194521489", - }, - {"version": "1.1", "result": True, "error": None, "id": "194521489"}, - ) - log.debug( - "Loading configuration file /adasd/asdasd/qeqwe/qwrqwrqwr/sdgsdgsdg/werwerwer/dfgerert/ertertert/ertetert/werwerwer" - ) - log.error("Unable to find 'pomelo' in database!") - log.info("POST /jsonrpc/ 200 65532") - log.info("POST /admin/ 401 42234") - log.warning("password was rejected for admin site.") - - def divide() -> None: - number = 1 - divisor = 0 - foos = ["foo"] * 100 - log.debug("in divide") - try: - number / divisor - except: - log.exception("An error of some kind occurred!") - - divide() - sleep(1) - log.critical("Out of memory!") - log.info("Server exited with code=-1") - log.info("[bold]EXITING...[/bold]", extra=dict(markup=True)) diff --git a/spaces/pksx01/Audio-MNIST/README.md b/spaces/pksx01/Audio-MNIST/README.md deleted file mode 100644 index 1a96bd6a2a4dc139d3ffb0959f67bfe33830db8d..0000000000000000000000000000000000000000 --- a/spaces/pksx01/Audio-MNIST/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Audio MNIST -emoji: 🐨 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/sdkddkver.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/sdkddkver.h deleted file mode 100644 index 44b5fb2f158dcac8028ffe10caec492f173fd459..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/sdkddkver.h +++ /dev/null @@ -1,220 +0,0 @@ -/** - * sdkddkver.h: Version definitions for SDK and DDK. Originally - * from ReactOS PSDK/DDK, this file is in the public domain: - * - * This file has no copyright assigned and is placed in the Public Domain. - * This file is part of the mingw-w64 runtime package. - * No warranty is given; refer to the file DISCLAIMER.PD within this package. 
- */ - -#ifndef _INC_SDKDDKVER -#define _INC_SDKDDKVER - -/* _WIN32_WINNT */ -#define _WIN32_WINNT_NT4 0x0400 -#define _WIN32_WINNT_WIN2K 0x0500 -#define _WIN32_WINNT_WINXP 0x0501 -#define _WIN32_WINNT_WS03 0x0502 -#define _WIN32_WINNT_WIN6 0x0600 -#define _WIN32_WINNT_VISTA 0x0600 -#define _WIN32_WINNT_WS08 0x0600 -#define _WIN32_WINNT_LONGHORN 0x0600 -#define _WIN32_WINNT_WIN7 0x0601 -#define _WIN32_WINNT_WIN8 0x0602 -#define _WIN32_WINNT_WINBLUE 0x0603 -#define _WIN32_WINNT_WINTHRESHOLD 0x0A00 -#define _WIN32_WINNT_WIN10 0x0A00 - -/* _WIN32_IE */ -#define _WIN32_IE_IE20 0x0200 -#define _WIN32_IE_IE30 0x0300 -#define _WIN32_IE_IE302 0x0302 -#define _WIN32_IE_IE40 0x0400 -#define _WIN32_IE_IE401 0x0401 -#define _WIN32_IE_IE50 0x0500 -#define _WIN32_IE_IE501 0x0501 -#define _WIN32_IE_IE55 0x0550 -#define _WIN32_IE_IE60 0x0600 -#define _WIN32_IE_IE60SP1 0x0601 -#define _WIN32_IE_IE60SP2 0x0603 -#define _WIN32_IE_IE70 0x0700 -#define _WIN32_IE_IE80 0x0800 -#define _WIN32_IE_IE90 0x0900 -#define _WIN32_IE_IE100 0x0a00 -#define _WIN32_IE_IE110 0x0A00 - -/* Mappings Between IE Version and Windows Version */ -#define _WIN32_IE_NT4 _WIN32_IE_IE20 -#define _WIN32_IE_NT4SP1 _WIN32_IE_IE20 -#define _WIN32_IE_NT4SP2 _WIN32_IE_IE20 -#define _WIN32_IE_NT4SP3 _WIN32_IE_IE302 -#define _WIN32_IE_NT4SP4 _WIN32_IE_IE401 -#define _WIN32_IE_NT4SP5 _WIN32_IE_IE401 -#define _WIN32_IE_NT4SP6 _WIN32_IE_IE50 -#define _WIN32_IE_WIN98 _WIN32_IE_IE401 -#define _WIN32_IE_WIN98SE _WIN32_IE_IE50 -#define _WIN32_IE_WINME _WIN32_IE_IE55 -#define _WIN32_IE_WIN2K _WIN32_IE_IE501 -#define _WIN32_IE_WIN2KSP1 _WIN32_IE_IE501 -#define _WIN32_IE_WIN2KSP2 _WIN32_IE_IE501 -#define _WIN32_IE_WIN2KSP3 _WIN32_IE_IE501 -#define _WIN32_IE_WIN2KSP4 _WIN32_IE_IE501 -#define _WIN32_IE_XP _WIN32_IE_IE60 -#define _WIN32_IE_XPSP1 _WIN32_IE_IE60SP1 -#define _WIN32_IE_XPSP2 _WIN32_IE_IE60SP2 -#define _WIN32_IE_WS03 0x0602 -#define _WIN32_IE_WS03SP1 _WIN32_IE_IE60SP2 -#define _WIN32_IE_WIN6 _WIN32_IE_IE70 -#define _WIN32_IE_LONGHORN _WIN32_IE_IE70 -#define _WIN32_IE_WIN7 _WIN32_IE_IE80 -#define _WIN32_IE_WIN8 _WIN32_IE_IE100 -#define _WIN32_IE_WINBLUE _WIN32_IE_IE100 -#define _WIN32_IE_WINTHRESHOLD _WIN32_IE_IE110 -#define _WIN32_IE_WIN10 _WIN32_IE_IE110 - -/* NTDDI_VERSION */ -#ifndef NTDDI_WIN2K -#define NTDDI_WIN2K 0x05000000 -#endif -#ifndef NTDDI_WIN2KSP1 -#define NTDDI_WIN2KSP1 0x05000100 -#endif -#ifndef NTDDI_WIN2KSP2 -#define NTDDI_WIN2KSP2 0x05000200 -#endif -#ifndef NTDDI_WIN2KSP3 -#define NTDDI_WIN2KSP3 0x05000300 -#endif -#ifndef NTDDI_WIN2KSP4 -#define NTDDI_WIN2KSP4 0x05000400 -#endif - -#ifndef NTDDI_WINXP -#define NTDDI_WINXP 0x05010000 -#endif -#ifndef NTDDI_WINXPSP1 -#define NTDDI_WINXPSP1 0x05010100 -#endif -#ifndef NTDDI_WINXPSP2 -#define NTDDI_WINXPSP2 0x05010200 -#endif -#ifndef NTDDI_WINXPSP3 -#define NTDDI_WINXPSP3 0x05010300 -#endif -#ifndef NTDDI_WINXPSP4 -#define NTDDI_WINXPSP4 0x05010400 -#endif - -#define NTDDI_WS03 0x05020000 -#define NTDDI_WS03SP1 0x05020100 -#define NTDDI_WS03SP2 0x05020200 -#define NTDDI_WS03SP3 0x05020300 -#define NTDDI_WS03SP4 0x05020400 - -#define NTDDI_WIN6 0x06000000 -#define NTDDI_WIN6SP1 0x06000100 -#define NTDDI_WIN6SP2 0x06000200 -#define NTDDI_WIN6SP3 0x06000300 -#define NTDDI_WIN6SP4 0x06000400 - -#define NTDDI_VISTA NTDDI_WIN6 -#define NTDDI_VISTASP1 NTDDI_WIN6SP1 -#define NTDDI_VISTASP2 NTDDI_WIN6SP2 -#define NTDDI_VISTASP3 NTDDI_WIN6SP3 -#define NTDDI_VISTASP4 NTDDI_WIN6SP4 -#define NTDDI_LONGHORN NTDDI_VISTA - -#define NTDDI_WS08 NTDDI_WIN6SP1 -#define NTDDI_WS08SP2 NTDDI_WIN6SP2 
-#define NTDDI_WS08SP3 NTDDI_WIN6SP3 -#define NTDDI_WS08SP4 NTDDI_WIN6SP4 - -#define NTDDI_WIN7 0x06010000 -#define NTDDI_WIN8 0x06020000 -#define NTDDI_WINBLUE 0x06030000 -#define NTDDI_WINTHRESHOLD 0x0A000000 -#define NTDDI_WIN10 0x0A000000 -#define NTDDI_WIN10_TH2 0x0A000001 -#define NTDDI_WIN10_RS1 0x0A000002 -#define NTDDI_WIN10_RS2 0x0A000003 -#define NTDDI_WIN10_RS3 0x0A000004 -#define NTDDI_WIN10_RS4 0x0A000005 -#define NTDDI_WIN10_RS5 0x0A000006 -#define NTDDI_WIN10_19H1 0x0A000007 -#define NTDDI_WIN10_VB 0x0A000008 -#define NTDDI_WIN10_MN 0x0A000009 -#define NTDDI_WIN10_FE 0x0A00000A - -#define WDK_NTDDI_VERSION NTDDI_WIN10_FE - -/* Version Fields in NTDDI_VERSION */ -#define OSVERSION_MASK 0xFFFF0000U -#define SPVERSION_MASK 0x0000FF00 -#define SUBVERSION_MASK 0x000000FF - -/* Macros to Extract Version Fields From NTDDI_VERSION */ -#define OSVER(Version) ((Version) & OSVERSION_MASK) -#define SPVER(Version) (((Version) & SPVERSION_MASK) >> 8) -#define SUBVER(Version) (((Version) & SUBVERSION_MASK)) - -/* Macros to get the NTDDI for a given WIN32 */ -#define NTDDI_VERSION_FROM_WIN32_WINNT2(Version) Version##0000 -#define NTDDI_VERSION_FROM_WIN32_WINNT(Version) NTDDI_VERSION_FROM_WIN32_WINNT2(Version) - -/* Select Default WIN32_WINNT Value */ -#if !defined(_WIN32_WINNT) && !defined(_CHICAGO_) -#define _WIN32_WINNT _WIN32_WINNT_WS03 -#endif - -/* Choose NTDDI Version */ -#ifndef NTDDI_VERSION -#ifdef _WIN32_WINNT -#define NTDDI_VERSION NTDDI_VERSION_FROM_WIN32_WINNT(_WIN32_WINNT) -#else -#define NTDDI_VERSION NTDDI_WS03 -#endif -#endif - -/* Choose WINVER Value */ -#ifndef WINVER -#ifdef _WIN32_WINNT -#define WINVER _WIN32_WINNT -#else -#define WINVER 0x0502 -#endif -#endif - -/* Choose IE Version */ -#ifndef _WIN32_IE -#ifdef _WIN32_WINNT -#if (_WIN32_WINNT <= _WIN32_WINNT_NT4) -#define _WIN32_IE _WIN32_IE_IE50 -#elif (_WIN32_WINNT <= _WIN32_WINNT_WIN2K) -#define _WIN32_IE _WIN32_IE_IE501 -#elif (_WIN32_WINNT <= _WIN32_WINNT_WINXP) -#define _WIN32_IE _WIN32_IE_IE60 -#elif (_WIN32_WINNT <= _WIN32_WINNT_WS03) -#define _WIN32_IE _WIN32_IE_WS03 -#elif (_WIN32_WINNT <= _WIN32_WINNT_VISTA) -#define _WIN32_IE _WIN32_IE_LONGHORN -#elif (_WIN32_WINNT <= _WIN32_WINNT_WIN7) -#define _WIN32_IE _WIN32_IE_WIN7 -#elif (_WIN32_WINNT <= _WIN32_WINNT_WIN8) -#define _WIN32_IE _WIN32_IE_WIN8 -#else -#define _WIN32_IE 0x0a00 -#endif -#else -#define _WIN32_IE 0x0700 -#endif -#endif - -/* Make Sure NTDDI_VERSION and _WIN32_WINNT Match */ -#if ((OSVER(NTDDI_VERSION) == NTDDI_WIN2K) && (_WIN32_WINNT != _WIN32_WINNT_WIN2K)) || \ - ((OSVER(NTDDI_VERSION) == NTDDI_WINXP) && (_WIN32_WINNT != _WIN32_WINNT_WINXP)) || \ - ((OSVER(NTDDI_VERSION) == NTDDI_WS03) && (_WIN32_WINNT != _WIN32_WINNT_WS03)) || \ - ((OSVER(NTDDI_VERSION) == NTDDI_WINXP) && (_WIN32_WINNT != _WIN32_WINNT_WINXP)) -#error NTDDI_VERSION and _WIN32_WINNT mismatch! -#endif - -#endif /* _INC_SDKDDKVER */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/FliImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/FliImagePlugin.py deleted file mode 100644 index 8f641ece998b376f250ba8ceaa79706e3207a010..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/FliImagePlugin.py +++ /dev/null @@ -1,171 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# FLI/FLC file handling. 
-# -# History: -# 95-09-01 fl Created -# 97-01-03 fl Fixed parser, setup decoder tile -# 98-07-15 fl Renamed offset attribute to avoid name clash -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1995-97. -# -# See the README file for information on usage and redistribution. -# - -import os - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 - -# -# decoder - - -def _accept(prefix): - return ( - len(prefix) >= 6 - and i16(prefix, 4) in [0xAF11, 0xAF12] - and i16(prefix, 14) in [0, 3] # flags - ) - - -## -# Image plugin for the FLI/FLC animation format. Use the seek -# method to load individual frames. - - -class FliImageFile(ImageFile.ImageFile): - format = "FLI" - format_description = "Autodesk FLI/FLC Animation" - _close_exclusive_fp_after_loading = False - - def _open(self): - # HEAD - s = self.fp.read(128) - if not (_accept(s) and s[20:22] == b"\x00\x00"): - msg = "not an FLI/FLC file" - raise SyntaxError(msg) - - # frames - self.n_frames = i16(s, 6) - self.is_animated = self.n_frames > 1 - - # image characteristics - self._mode = "P" - self._size = i16(s, 8), i16(s, 10) - - # animation speed - duration = i32(s, 16) - magic = i16(s, 4) - if magic == 0xAF11: - duration = (duration * 1000) // 70 - self.info["duration"] = duration - - # look for palette - palette = [(a, a, a) for a in range(256)] - - s = self.fp.read(16) - - self.__offset = 128 - - if i16(s, 4) == 0xF100: - # prefix chunk; ignore it - self.__offset = self.__offset + i32(s) - s = self.fp.read(16) - - if i16(s, 4) == 0xF1FA: - # look for palette chunk - number_of_subchunks = i16(s, 6) - chunk_size = None - for _ in range(number_of_subchunks): - if chunk_size is not None: - self.fp.seek(chunk_size - 6, os.SEEK_CUR) - s = self.fp.read(6) - chunk_type = i16(s, 4) - if chunk_type in (4, 11): - self._palette(palette, 2 if chunk_type == 11 else 0) - break - chunk_size = i32(s) - if not chunk_size: - break - - palette = [o8(r) + o8(g) + o8(b) for (r, g, b) in palette] - self.palette = ImagePalette.raw("RGB", b"".join(palette)) - - # set things up to decode first frame - self.__frame = -1 - self._fp = self.fp - self.__rewind = self.fp.tell() - self.seek(0) - - def _palette(self, palette, shift): - # load palette - - i = 0 - for e in range(i16(self.fp.read(2))): - s = self.fp.read(2) - i = i + s[0] - n = s[1] - if n == 0: - n = 256 - s = self.fp.read(n * 3) - for n in range(0, len(s), 3): - r = s[n] << shift - g = s[n + 1] << shift - b = s[n + 2] << shift - palette[i] = (r, g, b) - i += 1 - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self._seek(0) - - for f in range(self.__frame + 1, frame + 1): - self._seek(f) - - def _seek(self, frame): - if frame == 0: - self.__frame = -1 - self._fp.seek(self.__rewind) - self.__offset = 128 - else: - # ensure that the previous frame was loaded - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - self.__frame = frame - - # move to next frame - self.fp = self._fp - self.fp.seek(self.__offset) - - s = self.fp.read(4) - if not s: - raise EOFError - - framesize = i32(s) - - self.decodermaxblock = framesize - self.tile = [("fli", (0, 0) + self.size, self.__offset, None)] - - self.__offset += framesize - - def tell(self): - return self.__frame - - -# -# registry - -Image.register_open(FliImageFile.format, FliImageFile, _accept) - -Image.register_extensions(FliImageFile.format, 
[".fli", ".flc"]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/html.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/html.py deleted file mode 100644 index 8534446a2d12c877b4552b493285b8ee858bce44..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/html.py +++ /dev/null @@ -1,72 +0,0 @@ -"""gr.HTML() component.""" - -from __future__ import annotations - -from typing import Any, Callable - -from gradio_client.documentation import document, set_documentation_group - -from gradio.components.base import Component -from gradio.events import Events - -set_documentation_group("component") - - -@document() -class HTML(Component): - """ - Used to display arbitrary HTML output. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a valid HTML {str}. - - Demos: text_analysis - Guides: key-features - """ - - EVENTS = [Events.change] - - def __init__( - self, - value: str | Callable = "", - *, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - ): - """ - Parameters: - value: Default value. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: The label for this component. Is used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: This parameter has no effect. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - """ - super().__init__( - label=label, - every=every, - show_label=show_label, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - value=value, - ) - - def example_inputs(self) -> Any: - return "

<p>Hello</p>
      " - - def preprocess(self, payload: str | None) -> str | None: - return payload - - def postprocess(self, value: str | None) -> str | None: - return value - - def api_info(self) -> dict[str, Any]: - return {"type": "string"} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/build_py.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/build_py.py deleted file mode 100644 index d30dc5bf42d806e03b055627b7f813f4b772d2f5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/build_py.py +++ /dev/null @@ -1,31 +0,0 @@ -from distutils.command.build_py import build_py as old_build_py -from numpy.distutils.misc_util import is_string - -class build_py(old_build_py): - - def run(self): - build_src = self.get_finalized_command('build_src') - if build_src.py_modules_dict and self.packages is None: - self.packages = list(build_src.py_modules_dict.keys ()) - old_build_py.run(self) - - def find_package_modules(self, package, package_dir): - modules = old_build_py.find_package_modules(self, package, package_dir) - - # Find build_src generated *.py files. - build_src = self.get_finalized_command('build_src') - modules += build_src.py_modules_dict.get(package, []) - - return modules - - def find_modules(self): - old_py_modules = self.py_modules[:] - new_py_modules = [_m for _m in self.py_modules if is_string(_m)] - self.py_modules[:] = new_py_modules - modules = old_build_py.find_modules(self) - self.py_modules[:] = old_py_modules - - return modules - - # XXX: Fix find_source_files for item in py_modules such that item is 3-tuple - # and item[2] is source file. diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/date/array.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/date/array.py deleted file mode 100644 index 39accd6d223a7f716deff7e6234038ca026b6af6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/date/array.py +++ /dev/null @@ -1,184 +0,0 @@ -from __future__ import annotations - -import datetime as dt -from typing import ( - TYPE_CHECKING, - Any, - cast, -) - -import numpy as np - -from pandas.core.dtypes.dtypes import register_extension_dtype - -from pandas.api.extensions import ( - ExtensionArray, - ExtensionDtype, -) -from pandas.api.types import pandas_dtype - -if TYPE_CHECKING: - from collections.abc import Sequence - - from pandas._typing import ( - Dtype, - PositionalIndexer, - ) - - -@register_extension_dtype -class DateDtype(ExtensionDtype): - @property - def type(self): - return dt.date - - @property - def name(self): - return "DateDtype" - - @classmethod - def construct_from_string(cls, string: str): - if not isinstance(string, str): - raise TypeError( - f"'construct_from_string' expects a string, got {type(string)}" - ) - - if string == cls.__name__: - return cls() - else: - raise TypeError(f"Cannot construct a '{cls.__name__}' from '{string}'") - - @classmethod - def construct_array_type(cls): - return DateArray - - @property - def na_value(self): - return dt.date.min - - def __repr__(self) -> str: - return self.name - - -class DateArray(ExtensionArray): - def __init__( - self, - dates: ( - dt.date - | Sequence[dt.date] - | tuple[np.ndarray, np.ndarray, np.ndarray] - | np.ndarray - ), - ) -> None: - if isinstance(dates, dt.date): - self._year = 
np.array([dates.year]) - self._month = np.array([dates.month]) - self._day = np.array([dates.year]) - return - - ldates = len(dates) - if isinstance(dates, list): - # pre-allocate the arrays since we know the size before hand - self._year = np.zeros(ldates, dtype=np.uint16) # 65535 (0, 9999) - self._month = np.zeros(ldates, dtype=np.uint8) # 255 (1, 31) - self._day = np.zeros(ldates, dtype=np.uint8) # 255 (1, 12) - # populate them - for i, (y, m, d) in enumerate( - (date.year, date.month, date.day) for date in dates - ): - self._year[i] = y - self._month[i] = m - self._day[i] = d - - elif isinstance(dates, tuple): - # only support triples - if ldates != 3: - raise ValueError("only triples are valid") - # check if all elements have the same type - if any(not isinstance(x, np.ndarray) for x in dates): - raise TypeError("invalid type") - ly, lm, ld = (len(cast(np.ndarray, d)) for d in dates) - if not ly == lm == ld: - raise ValueError( - f"tuple members must have the same length: {(ly, lm, ld)}" - ) - self._year = dates[0].astype(np.uint16) - self._month = dates[1].astype(np.uint8) - self._day = dates[2].astype(np.uint8) - - elif isinstance(dates, np.ndarray) and dates.dtype == "U10": - self._year = np.zeros(ldates, dtype=np.uint16) # 65535 (0, 9999) - self._month = np.zeros(ldates, dtype=np.uint8) # 255 (1, 31) - self._day = np.zeros(ldates, dtype=np.uint8) # 255 (1, 12) - - # error: "object_" object is not iterable - obj = np.char.split(dates, sep="-") - for (i,), (y, m, d) in np.ndenumerate(obj): # type: ignore[misc] - self._year[i] = int(y) - self._month[i] = int(m) - self._day[i] = int(d) - - else: - raise TypeError(f"{type(dates)} is not supported") - - @property - def dtype(self) -> ExtensionDtype: - return DateDtype() - - def astype(self, dtype, copy=True): - dtype = pandas_dtype(dtype) - - if isinstance(dtype, DateDtype): - data = self.copy() if copy else self - else: - data = self.to_numpy(dtype=dtype, copy=copy, na_value=dt.date.min) - - return data - - @property - def nbytes(self) -> int: - return self._year.nbytes + self._month.nbytes + self._day.nbytes - - def __len__(self) -> int: - return len(self._year) # all 3 arrays are enforced to have the same length - - def __getitem__(self, item: PositionalIndexer): - if isinstance(item, int): - return dt.date(self._year[item], self._month[item], self._day[item]) - else: - raise NotImplementedError("only ints are supported as indexes") - - def __setitem__(self, key: int | slice | np.ndarray, value: Any) -> None: - if not isinstance(key, int): - raise NotImplementedError("only ints are supported as indexes") - - if not isinstance(value, dt.date): - raise TypeError("you can only set datetime.date types") - - self._year[key] = value.year - self._month[key] = value.month - self._day[key] = value.day - - def __repr__(self) -> str: - return f"DateArray{list(zip(self._year, self._month, self._day))}" - - def copy(self) -> DateArray: - return DateArray((self._year.copy(), self._month.copy(), self._day.copy())) - - def isna(self) -> np.ndarray: - return np.logical_and( - np.logical_and( - self._year == dt.date.min.year, self._month == dt.date.min.month - ), - self._day == dt.date.min.day, - ) - - @classmethod - def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy=False): - if isinstance(scalars, dt.date): - pass - elif isinstance(scalars, DateArray): - pass - elif isinstance(scalars, np.ndarray): - scalars = scalars.astype("U10") # 10 chars for yyyy-mm-dd - return DateArray(scalars) diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/direct_url_helpers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/direct_url_helpers.py deleted file mode 100644 index 0e8e5e1608b911e789a3d346ebe48aa7cc54b79e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/direct_url_helpers.py +++ /dev/null @@ -1,87 +0,0 @@ -from typing import Optional - -from pip._internal.models.direct_url import ArchiveInfo, DirectUrl, DirInfo, VcsInfo -from pip._internal.models.link import Link -from pip._internal.utils.urls import path_to_url -from pip._internal.vcs import vcs - - -def direct_url_as_pep440_direct_reference(direct_url: DirectUrl, name: str) -> str: - """Convert a DirectUrl to a pip requirement string.""" - direct_url.validate() # if invalid, this is a pip bug - requirement = name + " @ " - fragments = [] - if isinstance(direct_url.info, VcsInfo): - requirement += "{}+{}@{}".format( - direct_url.info.vcs, direct_url.url, direct_url.info.commit_id - ) - elif isinstance(direct_url.info, ArchiveInfo): - requirement += direct_url.url - if direct_url.info.hash: - fragments.append(direct_url.info.hash) - else: - assert isinstance(direct_url.info, DirInfo) - requirement += direct_url.url - if direct_url.subdirectory: - fragments.append("subdirectory=" + direct_url.subdirectory) - if fragments: - requirement += "#" + "&".join(fragments) - return requirement - - -def direct_url_for_editable(source_dir: str) -> DirectUrl: - return DirectUrl( - url=path_to_url(source_dir), - info=DirInfo(editable=True), - ) - - -def direct_url_from_link( - link: Link, source_dir: Optional[str] = None, link_is_in_wheel_cache: bool = False -) -> DirectUrl: - if link.is_vcs: - vcs_backend = vcs.get_backend_for_scheme(link.scheme) - assert vcs_backend - url, requested_revision, _ = vcs_backend.get_url_rev_and_auth( - link.url_without_fragment - ) - # For VCS links, we need to find out and add commit_id. - if link_is_in_wheel_cache: - # If the requested VCS link corresponds to a cached - # wheel, it means the requested revision was an - # immutable commit hash, otherwise it would not have - # been cached. In that case we don't have a source_dir - # with the VCS checkout. - assert requested_revision - commit_id = requested_revision - else: - # If the wheel was not in cache, it means we have - # had to checkout from VCS to build and we have a source_dir - # which we can inspect to find out the commit id. 
- assert source_dir - commit_id = vcs_backend.get_revision(source_dir) - return DirectUrl( - url=url, - info=VcsInfo( - vcs=vcs_backend.name, - commit_id=commit_id, - requested_revision=requested_revision, - ), - subdirectory=link.subdirectory_fragment, - ) - elif link.is_existing_dir(): - return DirectUrl( - url=link.url_without_fragment, - info=DirInfo(), - subdirectory=link.subdirectory_fragment, - ) - else: - hash = None - hash_name = link.hash_name - if hash_name: - hash = f"{hash_name}={link.hash}" - return DirectUrl( - url=link.url_without_fragment, - info=ArchiveInfo(hash=hash), - subdirectory=link.subdirectory_fragment, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/vim.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/vim.py deleted file mode 100644 index a254bbaacb835060c33d5157d16bb2d16b813eeb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/vim.py +++ /dev/null @@ -1,62 +0,0 @@ -""" - pygments.styles.vim - ~~~~~~~~~~~~~~~~~~~ - - A highlighting style for Pygments, inspired by vim. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Whitespace, Token - - -class VimStyle(Style): - """ - Styles somewhat like vim 7.0 - """ - - background_color = "#000000" - highlight_color = "#222222" - - styles = { - Token: "#cccccc", - Whitespace: "", - Comment: "#000080", - Comment.Preproc: "", - Comment.Special: "bold #cd0000", - - Keyword: "#cdcd00", - Keyword.Declaration: "#00cd00", - Keyword.Namespace: "#cd00cd", - Keyword.Pseudo: "", - Keyword.Type: "#00cd00", - - Operator: "#3399cc", - Operator.Word: "#cdcd00", - - Name: "", - Name.Class: "#00cdcd", - Name.Builtin: "#cd00cd", - Name.Exception: "bold #666699", - Name.Variable: "#00cdcd", - - String: "#cd0000", - Number: "#cd00cd", - - Generic.Heading: "bold #000080", - Generic.Subheading: "bold #800080", - Generic.Deleted: "#cd0000", - Generic.Inserted: "#00cd00", - Generic.Error: "#FF0000", - Generic.Emph: "italic", - Generic.Strong: "bold", - Generic.EmphStrong: "bold italic", - Generic.Prompt: "bold #000080", - Generic.Output: "#888", - Generic.Traceback: "#04D", - - Error: "border:#FF0000" - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tzdata/zoneinfo/Atlantic/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tzdata/zoneinfo/Atlantic/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/psalama/UT_Hackathon/README.md b/spaces/psalama/UT_Hackathon/README.md deleted file mode 100644 index 1e74330110cc421f9aadab742a4925eced7206e1..0000000000000000000000000000000000000000 --- a/spaces/psalama/UT_Hackathon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: UT Hackathon - Level 0 -emoji: 🐢 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -duplicated_from: ljrmary/UT_Hackathon ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/spec_utils.py 
b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/spec_utils.py deleted file mode 100644 index 15a19363691cfd957a59bf15e6977400afc1f557..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/spec_utils.py +++ /dev/null @@ -1,671 +0,0 @@ -import hashlib -import json -import math -import os - -import librosa -import numpy as np -import soundfile as sf -from tqdm import tqdm - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError("h1_shape[3] must be greater than h2_shape[3]") - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq = s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def wave_to_spectrogram_mt( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - import threading - - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - def run_thread(**kwargs): - global spec_left - spec_left = librosa.stft(**kwargs) - - thread = threading.Thread( - target=run_thread, - kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length}, - ) - thread.start() - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - thread.join() - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def combine_spectrograms(specs, mp): - l = min([specs[i].shape[2] for i in specs]) - spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64) - offset = 0 - bands_n = len(mp.param["band"]) - - for d in range(1, bands_n + 1): - h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"] - spec_c[:, offset : offset + h, :l] = specs[d][ - :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l - ] - offset += h - - if offset > mp.param["bins"]: - raise ValueError("Too much bins") - - # lowpass fiter - if ( - mp.param["pre_filter_start"] > 0 - ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']: - if bands_n == 1: - spec_c = fft_lp_filter( - spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"] - ) - else: - gp = 
1 - for b in range( - mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"] - ): - g = math.pow( - 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0 - ) - gp = g - spec_c[:, b, :] *= g - - return np.asfortranarray(spec_c) - - -def spectrogram_to_image(spec, mode="magnitude"): - if mode == "magnitude": - if np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y**2 + 1e-8) - elif mode == "phase": - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2) - - return img - - -def reduce_vocal_aggressively(X, y, softmask): - v = X - y - y_mag_tmp = np.abs(y) - v_mag_tmp = np.abs(v) - - v_mask = v_mag_tmp > y_mag_tmp - y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf) - - return y_mag * np.exp(1.0j * np.angle(y)) - - -def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError("min_range must be >= fade_area * 2") - - mag = mag.copy() - - idx = np.where(ref.mean(axis=(0, 1)) < thres)[0] - starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - uninformative = np.where(ends - starts > min_range)[0] - if len(uninformative) > 0: - starts = starts[uninformative] - ends = ends[uninformative] - old_e = None - for s, e in zip(starts, ends): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight = np.linspace(0, 1, fade_size) - mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size] - else: - s -= fade_size - - if e != mag.shape[2]: - weight = np.linspace(1, 0, fade_size) - mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e] - else: - e += fade_size - - mag[:, :, s + fade_size : e - fade_size] += ref[ - :, :, s + fade_size : e - fade_size - ] - old_e = e - - return mag - - -def align_wave_head_and_tail(a, b): - l = min([a[0].size, b[0].size]) - - return a[:l, :l], b[:l, :l] - - -def cache_or_load(mix_path, inst_path, mp): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = "mph{}".format( - hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest() - ) - mix_cache_dir = os.path.join("cache", cache_dir) - inst_cache_dir = os.path.join("cache", cache_dir) - - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy") - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy") - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X_spec_m = np.load(mix_cache_path) - y_spec_m = np.load(inst_cache_path) - else: - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - X_wave[d], _ = librosa.load( - mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"] - ) - y_wave[d], _ = librosa.load( - inst_path, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - else: # lower bands - X_wave[d] = librosa.resample( - X_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - y_wave[d] = librosa.resample( - y_wave[d + 
1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d]) - - X_spec_s[d] = wave_to_spectrogram( - X_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - y_spec_s[d] = wave_to_spectrogram( - y_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - del X_wave, y_wave - - X_spec_m = combine_spectrograms(X_spec_s, mp) - y_spec_m = combine_spectrograms(y_spec_s, mp) - - if X_spec_m.shape != y_spec_m.shape: - raise ValueError("The combined spectrograms are different: " + mix_path) - - _, ext = os.path.splitext(mix_path) - - np.save(mix_cache_path, X_spec_m) - np.save(inst_cache_path, y_spec_m) - - return X_spec_m, y_spec_m - - -def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2): - import threading - - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - def run_thread(**kwargs): - global wave_left - wave_left = librosa.istft(**kwargs) - - thread = threading.Thread( - target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length} - ) - thread.start() - wave_right = librosa.istft(spec_right, hop_length=hop_length) - thread.join() - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None): - wave_band = {} - bands_n = len(mp.param["band"]) - offset = 0 - - for d in range(1, bands_n + 1): - bp = mp.param["band"][d] - spec_s = np.ndarray( - shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex - ) - h = bp["crop_stop"] - bp["crop_start"] - spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[ - :, offset : offset + h, : - ] - - offset += h - if d == bands_n: # higher - if extra_bins_h: # if --high_end_process bypass - max_bin = bp["n_fft"] // 2 - spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[ - :, :extra_bins_h, : - ] - if bp["hpf_start"] > 0: - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - if bands_n == 1: - wave = spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - else: - wave = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - else: - sr = 
mp.param["band"][d + 1]["sr"] - if d == 1: # lower - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave = librosa.resample( - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - bp["sr"], - sr, - res_type="sinc_fastest", - ) - else: # mid - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave2 = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest") - wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy") - - return wave.T - - -def fft_lp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop): - g -= 1 / (bin_stop - bin_start) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, bin_stop:, :] *= 0 - - return spec - - -def fft_hp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop, -1): - g -= 1 / (bin_start - bin_stop) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, 0 : bin_stop + 1, :] *= 0 - - return spec - - -def mirroring(a, spec_m, input_high_end, mp): - if "mirroring" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mirror = mirror * np.exp(1.0j * np.angle(input_high_end)) - - return np.where( - np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror - ) - - if "mirroring2" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mi = np.multiply(mirror, input_high_end * 1.7) - - return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi) - - -def ensembling(a, specs): - for i in range(1, len(specs)): - if i == 1: - spec = specs[0] - - ln = min([spec.shape[2], specs[i].shape[2]]) - spec = spec[:, :, :ln] - specs[i] = specs[i][:, :, :ln] - - if "min_mag" == a: - spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec) - if "max_mag" == a: - spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec) - - return spec - - -def stft(wave, nfft, hl): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - spec_left = librosa.stft(wave_left, nfft, hop_length=hl) - spec_right = librosa.stft(wave_right, nfft, hop_length=hl) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def istft(spec, hl): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hl) - wave_right = librosa.istft(spec_right, hop_length=hl) - wave = np.asfortranarray([wave_left, wave_right]) - - -if __name__ == "__main__": - import argparse - import time - - import cv2 - from model_param_init import ModelParameters - - p = argparse.ArgumentParser() - p.add_argument( - "--algorithm", - "-a", - type=str, - choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"], - default="min_mag", - ) - p.add_argument( - "--model_params", - "-m", - type=str, - default=os.path.join("modelparams", "1band_sr44100_hl512.json"), - ) - p.add_argument("--output_name", "-o", type=str, default="output") - p.add_argument("--vocals_only", "-v", action="store_true") - p.add_argument("input", nargs="+") - args = 
p.parse_args() - - start_time = time.time() - - if args.algorithm.startswith("invert") and len(args.input) != 2: - raise ValueError("There should be two input files.") - - if not args.algorithm.startswith("invert") and len(args.input) < 2: - raise ValueError("There must be at least two input files.") - - wave, specs = {}, {} - mp = ModelParameters(args.model_params) - - for i in range(len(args.input)): - spec = {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - wave[d], _ = librosa.load( - args.input[i], - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - - if len(wave[d].shape) == 1: # mono to stereo - wave[d] = np.array([wave[d], wave[d]]) - else: # lower bands - wave[d] = librosa.resample( - wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - spec[d] = wave_to_spectrogram( - wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - specs[i] = combine_spectrograms(spec, mp) - - del wave - - if args.algorithm == "deep": - d_spec = np.where(np.abs(specs[0]) <= np.abs(spec[1]), specs[0], spec[1]) - v_spec = d_spec - specs[1] - sf.write( - os.path.join("{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - - if args.algorithm.startswith("invert"): - ln = min([specs[0].shape[2], specs[1].shape[2]]) - specs[0] = specs[0][:, :, :ln] - specs[1] = specs[1][:, :, :ln] - - if "invert_p" == args.algorithm: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - max_mag = np.where(X_mag >= y_mag, X_mag, y_mag) - v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0])) - else: - specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2) - v_spec = specs[0] - specs[1] - - if not args.vocals_only: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - v_mag = np.abs(v_spec) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite("{}_X.png".format(args.output_name), X_image) - cv2.imwrite("{}_y.png".format(args.output_name), y_image) - cv2.imwrite("{}_v.png".format(args.output_name), v_image) - - sf.write( - "{}_X.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[0], mp), - mp.param["sr"], - ) - sf.write( - "{}_y.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[1], mp), - mp.param["sr"], - ) - - sf.write( - "{}_v.wav".format(args.output_name), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - else: - if not args.algorithm == "deep": - sf.write( - os.path.join("ensembled", "{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp), - mp.param["sr"], - ) - - if args.algorithm == "align": - trackalignment = [ - { - "file1": '"{}"'.format(args.input[0]), - "file2": '"{}"'.format(args.input[1]), - } - ] - - for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."): - os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}") - - # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1)) diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py deleted file mode 100644 index d2012ee1d27c862fe1884ae30d24138563a97664..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py +++ /dev/null 
@@ -1,188 +0,0 @@ -import gc -import requests -import subprocess -import sys -import os, warnings, librosa -import soundfile as sf -import numpy as np -import torch -import json - -folder = os.path.dirname(os.path.abspath(__file__)) -folder = os.path.dirname(folder) -folder = os.path.dirname(folder) -folder = os.path.dirname(folder) -now_dir = os.path.dirname(folder) - -import sys -sys.path.append(now_dir) - -import lib.infer.infer_libs.uvr5_pack.mdx as mdx -branch = "https://github.com/NaJeongMo/Colab-for-MDX_B" - -model_params = "https://raw.githubusercontent.com/TRvlvr/application_data/main/mdx_model_data/model_data.json" -_Models = "https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/" -# _models = "https://pastebin.com/raw/jBzYB8vz" -_models = "https://raw.githubusercontent.com/TRvlvr/application_data/main/filelists/download_checks.json" - - -file_folder = "Colab-for-MDX_B" -model_request = requests.get(_models).json() -model_ids = model_request["mdx_download_list"].values() -demucs_download_list = model_request["demucs_download_list"] - -# Iterate through the keys and get the model names -model_ids_demucs_inpure = [name.split(":")[1].strip() for name in demucs_download_list.keys()] - -# Remove duplicates by converting the list to a set and then back to a list -model_ids_demucs = list(set(model_ids_demucs_inpure)) - -# Remove some not working models -demucs_ids_to_delete = ["tasnet_extra", "tasnet", "light_extra", "light", "demucs_extra", "demucs", "demucs_unittest", "demucs48_hq", "repro_mdx_a_hybrid_only", "repro_mdx_a_time_only", "repro_mdx_a", "UVR Model"] - -# Add some models that are not in the list -demucs_ids_to_add = ["SIG"] - -# Add the new ID to the model_ids_demucs list - -for demucs_ids_to_add in demucs_ids_to_add: - if demucs_ids_to_add not in model_ids_demucs: - model_ids_demucs.append(demucs_ids_to_add) - -# If the ID is in the list of IDs to delete, remove it from the list of model_ids_demucs -for demucs_ids_to_delete in demucs_ids_to_delete: - if demucs_ids_to_delete in model_ids_demucs: - model_ids_demucs.remove(demucs_ids_to_delete) - -#print(model_ids) -model_params = requests.get(model_params).json() -#Remove request for stem_naming -stem_naming = { - "Vocals": "Instrumental", - "Other": "Instruments", - "Instrumental": "Vocals", - "Drums": "Drumless", - "Bass": "Bassless" -} - - -os.makedirs(f"{now_dir}/assets/uvr5_weights/MDX", exist_ok=True) - -warnings.filterwarnings("ignore") -cpu = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda:0") -elif torch.backends.mps.is_available(): - device = torch.device("mps") -else: - device = torch.device("cpu") - - -def get_model_list(): - return model_ids - -def get_demucs_model_list(): - return model_ids_demucs - -def id_to_ptm(mkey): - if mkey in model_ids: - #print(mkey) - mpath = f"{now_dir}/assets/uvr5_weights/MDX/{mkey}" - if not os.path.exists(f'{now_dir}/assets/uvr5_weights/MDX/{mkey}'): - print('Downloading model...',end=' ') - subprocess.run( - ["python", "-m", "wget", "-o", mpath, _Models+mkey] - ) - print(f'saved to {mpath}') - return mpath - else: - return mpath - else: - mpath = f'{now_dir}/assets/uvr5_weights/{mkey}' - return mpath - -def prepare_mdx(onnx,custom_param=False, dim_f=None, dim_t=None, n_fft=None, stem_name=None, compensation=None): - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - if custom_param: - assert not (dim_f is None or dim_t is None or n_fft is None or compensation is None), 'Custom parameter 
selected, but incomplete parameters are provided.' - mdx_model = mdx.MDX_Model( - device, - dim_f = dim_f, - dim_t = dim_t, - n_fft = n_fft, - stem_name=stem_name, - compensation=compensation - ) - else: - model_hash = mdx.MDX.get_hash(onnx) - if model_hash in model_params: - mp = model_params.get(model_hash) - mdx_model = mdx.MDX_Model( - device, - dim_f = mp["mdx_dim_f_set"], - dim_t = 2**mp["mdx_dim_t_set"], - n_fft = mp["mdx_n_fft_scale_set"], - stem_name=mp["primary_stem"], - compensation=compensation if not custom_param and compensation is not None else mp["compensate"] - ) - return mdx_model - -def run_mdx(onnx, mdx_model,filename, output_format='wav',diff=False,suffix=None,diff_suffix=None, denoise=False, m_threads=2): - mdx_sess = mdx.MDX(onnx,mdx_model) - print(f"Processing: {filename}") - if filename.lower().endswith('.wav'): - wave, sr = librosa.load(filename, mono=False, sr=44100) - else: - temp_wav = 'temp_audio.wav' - subprocess.run(['ffmpeg', '-i', filename, '-ar', '44100', '-ac', '2', temp_wav]) # Convert to WAV format - wave, sr = librosa.load(temp_wav, mono=False, sr=44100) - os.remove(temp_wav) - - #wave, sr = librosa.load(filename,mono=False, sr=44100) - # normalizing input wave gives better output - peak = max(np.max(wave), abs(np.min(wave))) - wave /= peak - if denoise: - wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads)) - wave_processed *= 0.5 - else: - wave_processed = mdx_sess.process_wave(wave, m_threads) - # return to previous peak - wave_processed *= peak - - stem_name = mdx_model.stem_name if suffix is None else suffix # use suffix if provided - save_path = os.path.basename(os.path.splitext(filename)[0]) - #vocals_save_path = os.path.join(vocals_folder, f"{save_path}_{stem_name}.{output_format}") - #instrumental_save_path = os.path.join(instrumental_folder, f"{save_path}_{stem_name}.{output_format}") - save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}" - save_path = os.path.join( - 'audios', - save_path - ) - sf.write( - save_path, - wave_processed.T, - sr - ) - - print(f'done, saved to: {save_path}') - - if diff: - diff_stem_name = stem_naming.get(stem_name) if diff_suffix is None else diff_suffix # use suffix if provided - stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name - save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}" - save_path = os.path.join( - 'audio-others', - save_path - ) - sf.write( - save_path, - (-wave_processed.T*mdx_model.compensation)+wave.T, - sr - ) - print(f'invert done, saved to: {save_path}') - del mdx_sess, wave_processed, wave - gc.collect() - -if __name__ == "__main__": - print() diff --git a/spaces/rahul2001/student_performance/src/utils.py b/spaces/rahul2001/student_performance/src/utils.py deleted file mode 100644 index 077b2cd322e4e73c26eba12393bcb805714eed38..0000000000000000000000000000000000000000 --- a/spaces/rahul2001/student_performance/src/utils.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import sys - -import numpy as np -import pandas as pd -import dill -import pickle -from sklearn.metrics import r2_score -from sklearn.model_selection import GridSearchCV - -from src.exception import CustomException - -def eval_model(true, predicted): - r2_square = r2_score(true, predicted) - return r2_square - - -def save_object(file_path , obj): - try: - dir_path = os.path.dirname(file_path) - - os.makedirs(dir_path,exist_ok= True) - - with open(file_path,"wb") as 
file_obj: - pickle.dump(obj,file_obj) - except Exception as e: - raise CustomException(e,sys) -def load_Obj(file_path): - try: - with open(file_path,"rb") as file_obj: - return dill.load(file_obj) - except Exception as e: - raise CustomException(e,sys) - - -def evaluate_model(X,Y,X_test,Y_test,Models,Param): - try: - report = {} - for i in range(len(list(Models))): - model = list(Models.values())[i] - para = Param[list(Models.keys())[i]] - - gs = GridSearchCV(model,para,cv=3) - gs.fit(X,Y) - - model.set_params(**gs.best_params_) - model.fit(X,Y) - - # Make predictions - y_train_pred = model.predict(X) - y_test_pred = model.predict(X_test) - - # Evaluate Train and Test dataset - model_test_r2 = eval_model(Y_test, y_test_pred) - - report[(list(Models.keys())[i])] = model_test_r2 - return report - - except Exception as e: - raise CustomException(e,sys) - \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/globals.global.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/globals.global.d.ts deleted file mode 100644 index ef1198c05024940c44e3c1a6429c26091fe2a94f..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/globals.global.d.ts +++ /dev/null @@ -1 +0,0 @@ -declare var global: typeof globalThis; diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/index.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/index.js deleted file mode 100644 index 455f9454ee6483f450a3c7b2e4570a10d6e78d39..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/index.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = -{ - parallel : require('./parallel.js'), - serial : require('./serial.js'), - serialOrdered : require('./serialOrdered.js') -}; diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/balanced-match/LICENSE.md b/spaces/rayan-saleh/whisper2notion/server/node_modules/balanced-match/LICENSE.md deleted file mode 100644 index 2cdc8e4148cc0aa1f788b25dbec4b22878644cdf..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/balanced-match/LICENSE.md +++ /dev/null @@ -1,21 +0,0 @@ -(MIT) - -Copyright (c) 2013 Julian Gruber <julian@juliangruber.com> - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies -of the Software, and to permit persons to whom the Software is furnished to do -so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
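The student_performance utils.py deleted above exposes an evaluate_model(X, Y, X_test, Y_test, Models, Param) helper that grid-searches each estimator and reports its test R² score. A minimal usage sketch, assuming that project's src package layout; the estimators, parameter grids, and synthetic data below are illustrative assumptions only:

```python
# Hypothetical driver for the evaluate_model() helper from the deleted
# student_performance utils.py; data, models and grids below are made up.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

from src.utils import evaluate_model  # assumed package layout of that project

# Synthetic regression data standing in for the student-performance features.
X_all, y_all = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, random_state=0)

models = {
    "Ridge": Ridge(),
    "RandomForest": RandomForestRegressor(random_state=0),
}
param_grids = {
    "Ridge": {"alpha": [0.1, 1.0, 10.0]},
    "RandomForest": {"n_estimators": [50, 100]},
}

# Returns {model_name: test R^2} after a small GridSearchCV per model.
report = evaluate_model(X_train, y_train, X_test, y_test, models, param_grids)
print(report)
```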
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Audi Mmi 2g Software Update 3 Cd REPACK Downloaden.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Audi Mmi 2g Software Update 3 Cd REPACK Downloaden.md deleted file mode 100644 index d4d1ce36d18fcd001e77da2cdbd22682d17fdd79..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Audi Mmi 2g Software Update 3 Cd REPACK Downloaden.md +++ /dev/null @@ -1,6 +0,0 @@ -

      audi mmi 2g software update 3 cd downloaden


      Download File ::: https://urlgoal.com/2uCKL1



- -3360 (3.3.60) - Part # 4F0 906 961AB (https://thepiratebay.se/torrent/10273165/ ... I must admit I have not done an MMI update to date, but I have done all sorts of tweaks to my last 3 ... These are the update CDs for the MMI 2G system. ... phone.. Audi told me it's impossible to update the MMI software, but I knew ...

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Intel Visual Fortran Composer Xe 2013 Crack 16 !FULL!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Intel Visual Fortran Composer Xe 2013 Crack 16 !FULL!.md deleted file mode 100644 index f4c101b62b5072c1a9e0d95765a7b9a32506cd27..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Intel Visual Fortran Composer Xe 2013 Crack 16 !FULL!.md +++ /dev/null @@ -1,28 +0,0 @@ - -```html -

      How to Install and Use Intel Visual Fortran Composer XE 2013 16

      - -

      Intel Visual Fortran Composer XE 2013 16 is a comprehensive set of software development tools for Fortran programmers. It includes the Intel Visual Fortran Compiler, the Intel Math Kernel Library, the Intel Debugger, and the Intel Integrated Performance Primitives. With these tools, you can create high-performance applications for Windows, Linux, and Mac OS X platforms.

      -

      How to Install Intel Visual Fortran Composer XE 2013 16

      -

      To install Intel Visual Fortran Composer XE 2013 16, you need to have a valid license file and a compatible operating system. You can download the installation package from the Intel website. Follow these steps to install the software:

      -

      Download intel visual fortran composer xe 2013 crack 16


      DOWNLOAD ===> https://urlgoal.com/2uCJFI



      -
1. Run the installer and accept the license agreement.
2. Select the components you want to install and the installation directory.
3. Enter your license file or serial number when prompted.
4. Wait for the installation to complete and restart your computer if required.

      How to Use Intel Visual Fortran Composer XE 2013 16

      -

      To use Intel Visual Fortran Composer XE 2013 16, you need to have a compatible integrated development environment (IDE) such as Microsoft Visual Studio or Eclipse. You can also use the command-line interface to compile and run your Fortran programs. Here are some tips on how to use the software:

      -
• To create a new project in Visual Studio, select File > New > Project and choose Intel Visual Fortran > Console Application.
• To add a new source file to your project, right-click on the project name in the Solution Explorer and select Add > New Item and choose Intel Visual Fortran > Source File.
• To edit your source code, use the code editor window and the IntelliSense feature to get syntax highlighting, code completion, and error detection.
• To compile your project, select Build > Build Solution or press F7. You can view the compiler output in the Output window.
• To debug your project, select Debug > Start Debugging or press F5. You can use the Debugger window to set breakpoints, watch variables, and step through your code.
• To run your project, select Debug > Start Without Debugging or press Ctrl+F5. You can view the program output in the Console window.

      For more information on how to use Intel Visual Fortran Composer XE 2013 16, you can refer to the Getting Started Guide and the Developer Guide and Reference.

      -```

      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Inventor 2010 32bit Full Crack __FULL__.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Inventor 2010 32bit Full Crack __FULL__.md deleted file mode 100644 index e59a0cdf4484721db66c772a3ab17eac3cccde90..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Inventor 2010 32bit Full Crack __FULL__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download Inventor 2010 32bit Full Crack


      Download Zip ✔✔✔ https://urlgoal.com/2uCJio



- -ElectraX VST Electra2 Cracked Full Latest Software Free Download [2021] VST ... VST3, AAX Bit: 32bit, 64bit Tabletka: present System Requirements: Windows ... 4 templates download torrent-adds inventor professional 2010 crack 64 bit.

      diff --git a/spaces/rfrossard/langchain-chat-with-pdf/app.py b/spaces/rfrossard/langchain-chat-with-pdf/app.py deleted file mode 100644 index d9e0caecdf6b304681aed10ab8ecea3552d43606..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/langchain-chat-with-pdf/app.py +++ /dev/null @@ -1,88 +0,0 @@ -import gradio as gr - -from langchain.document_loaders import OnlinePDFLoader - -from langchain.text_splitter import CharacterTextSplitter - -from langchain.llms import HuggingFaceHub - -from langchain.embeddings import HuggingFaceHubEmbeddings - -from langchain.vectorstores import Chroma - -from langchain.chains import RetrievalQA - - - -def loading_pdf(): - return "Loading..." - -def pdf_changes(pdf_doc, repo_id): - - loader = OnlinePDFLoader(pdf_doc.name) - documents = loader.load() - text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0) - texts = text_splitter.split_documents(documents) - embeddings = HuggingFaceHubEmbeddings() - db = Chroma.from_documents(texts, embeddings) - retriever = db.as_retriever() - llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0.1, "max_new_tokens":250}) - global qa - qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True) - return "Ready" - -def add_text(history, text): - history = history + [(text, None)] - return history, "" - -def bot(history): - response = infer(history[-1][0]) - history[-1][1] = response['result'] - return history - -def infer(question): - - query = question - result = qa({"query": query}) - - return result - -css=""" -#col-container {max-width: 700px; margin-left: auto; margin-right: auto;} -""" - -title = """ -
      -

      Chat with PDF

      -

      Upload a .PDF from your computer, click the "Load PDF to LangChain" button,
      - when everything is ready, you can start asking questions about the pdf ;)

      - Duplicate Space -
      -""" - - -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.HTML(title) - - with gr.Column(): - pdf_doc = gr.File(label="Load a pdf", file_types=['.pdf'], type="file") - repo_id = gr.Dropdown(label="LLM", choices=["google/flan-ul2", "OpenAssistant/oasst-sft-1-pythia-12b", "bigscience/bloomz"], value="google/flan-ul2") - with gr.Row(): - langchain_status = gr.Textbox(label="Status", placeholder="", interactive=False) - load_pdf = gr.Button("Load pdf to langchain") - - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=350) - question = gr.Textbox(label="Question", placeholder="Type your question and hit Enter ") - submit_btn = gr.Button("Send message") - #load_pdf.click(loading_pdf, None, langchain_status, queue=False) - repo_id.change(pdf_changes, inputs=[pdf_doc, repo_id], outputs=[langchain_status], queue=False) - load_pdf.click(pdf_changes, inputs=[pdf_doc, repo_id], outputs=[langchain_status], queue=False) - question.submit(add_text, [chatbot, question], [chatbot, question]).then( - bot, chatbot, chatbot - ) - submit_btn.click(add_text, [chatbot, question], [chatbot, question]).then( - bot, chatbot, chatbot - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Assassins Creed IV Black Flag Crack OnlyRELOADEDAssassins Creed IV Black Flag Crack OnlyRELOADE Discover the Secrets of the Caribbean with this Free and Safe Crack.md b/spaces/rorallitri/biomedical-language-models/logs/Assassins Creed IV Black Flag Crack OnlyRELOADEDAssassins Creed IV Black Flag Crack OnlyRELOADE Discover the Secrets of the Caribbean with this Free and Safe Crack.md deleted file mode 100644 index 535600789545d0393c527e1f8909a86f097b0bd2..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Assassins Creed IV Black Flag Crack OnlyRELOADEDAssassins Creed IV Black Flag Crack OnlyRELOADE Discover the Secrets of the Caribbean with this Free and Safe Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

Assassins Creed IV Black Flag Crack Only-RELOADED


      DOWNLOADhttps://tinurll.com/2uznd8




      diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Videos Download of Free Hindi Songs Discover New Music and Old Favorites.md b/spaces/rorallitri/biomedical-language-models/logs/HD Videos Download of Free Hindi Songs Discover New Music and Old Favorites.md deleted file mode 100644 index 07c9fb227a3dfb2f64e817e1340fb3858a3bc582..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Videos Download of Free Hindi Songs Discover New Music and Old Favorites.md +++ /dev/null @@ -1,19 +0,0 @@ -
      -

With Bollywood movies sweeping the world, Bollywood music is gaining popularity around the globe as well. Many websites let you stream Bollywood music online, and some even allow you to download Bollywood songs for free for offline playback. Where are the best places for Bollywood song downloads? In this article, I will introduce the five best sites for enjoying Bollywood music, plus a great tool that helps you easily download Bollywood songs from the Internet:

      -

      free hindi songs hd videos download


      Download Zip ……… https://tinurll.com/2uzlJ9



      -

Although there are many online downloaders and online music sites for Bollywood songs, they are either riddled with intrusive ads or offer few download resources. The best way to download Bollywood songs is therefore with a safe desktop downloader, and Free HD Video Converter Factory is highly recommended. It lets you download Bollywood music for free from over 1,000 websites at a higher download speed.
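If you prefer a scriptable route, here is a rough Python sketch using the open-source yt-dlp package — an assumption for illustration, not the desktop tool recommended above. The URL is a placeholder, and you should only download music you have the rights to.

```python
# Minimal sketch: fetch the audio track of a page with yt-dlp and save it as MP3.
# Requires: pip install yt-dlp, plus ffmpeg on the PATH for the MP3 conversion.
from yt_dlp import YoutubeDL

url = "https://example.com/some-bollywood-song"  # placeholder URL

options = {
    "format": "bestaudio/best",           # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",       # name the file after the track title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",      # convert the download to MP3
        "preferredcodec": "mp3",
        "preferredquality": "192",
    }],
}

with YoutubeDL(options) as ydl:
    ydl.download([url])
```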

      -

Are you looking for good websites to download Hindi songs beyond Bollywood? Try this one! PagalWorld is a clean, fast website for downloading all kinds of Indian music for free. You can get the latest Bollywood, Hindi, and Punjabi music from the website, and it also offers video and ringtone downloads. You can download Bollywood songs quickly and easily with no need to register. The concise interface and fast download speed make PagalWorld a popular free site for Bollywood music downloads.

      -

Video editing used to be a long and complicated process, but today recording and editing a short video with your smartphone is as easy as taking a selfie. A growing number of apps also offer royalty-free music so their users can create perfect lip-sync videos. These apps let you join a community of short music video creators and produce entertaining videos in which you dance and sing along to your favorite songs.

      -

      -

The app's music collection includes some of the most popular songs of the moment, so you can even participate in different music challenges or create lip-sync videos with music from your favorite singers. Triller lets you draw over videos, apply different visual effects or add text overlays, but some of these features must be purchased, as the free version of the app offers only a limited number of ways to edit music and video.

      -

After you finish recording, you can slow down your clips or use effects such as Black and White or Shine. You can share every video you make with the MuStar app to all popular social media platforms, which can help you gain more followers. Even though the app can be downloaded from the App Store or the Google Play Store for free, if you want to use it regularly, you'll have to choose your preferred subscription plan.

      -

You can add as many music tracks to your videos as you want, which means you can combine two or more songs in a single clip. The app also features fade-in and fade-out effects, so you can make smooth transitions between two songs. Sharing your favorite videos to Instagram or Facebook directly from the Video Maker with Music Editor app is easy, and you can also save your videos to your camera roll. The free version of the app contains only the basic music and video editing options, and to gain access to all features, you must select one of the available subscription plans.

      -

Top 6 Websites to Download Karaoke Songs [Free and Paid]: Are you looking for websites to download karaoke songs? Here are the 6 best karaoke song download websites, offering many karaoke versions of popular songs.

      -

Available on almost all operating systems, Gaana is one of the largest music streaming services in India. You can enjoy millions of Hindi, English and other regional songs for free, and you can also download Bollywood songs from your favorite movies and artists for free.

      -

Another great site for downloading Hindi songs is SongMP3.desi. The website lets you browse Hindi songs by singer, director, music director, composer, and star, which makes it easy to find and download your favorite Bollywood music. No registration is required!

      -

Hungama gives you access to Hindi music, videos, movies, TV shows, short films, and more. You can stream Hindi songs online and download them for offline listening at no cost. Moreover, it lets you browse music in a dark theme mode.

      -

Djmazak is the best free website for getting Bollywood songs and videos. It provides high-quality Hindi music from 128Kbps to 320Kbps, and it lets you preview the songs you want before downloading them. No registration is required!

      -

Webmusic.live has a music library of Hindi, Bengali, English and sports music. Its simple interface makes it easy to access Hindi songs: to find and download a track, just enter the song name in the search bar and click on the song you want to download.

      -

Mrhd.Uk offers all the new Hindi video songs of 2022 in HD. You can watch the latest Bollywood Hindi MP4 video songs online, download Full HD (1920×1080) Bollywood video songs, and stream trailers, DJ party remixes, mashups and other top trending Hindi music videos in 1080p for free.

      \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/How English Works A Grammar Practice Book With Answers Pdf The Ultimate English Grammar Reference and Practice Tool.md b/spaces/rorallitri/biomedical-language-models/logs/How English Works A Grammar Practice Book With Answers Pdf The Ultimate English Grammar Reference and Practice Tool.md deleted file mode 100644 index 3a703184668de643b632cc89c4b584fb056f7bdf..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/How English Works A Grammar Practice Book With Answers Pdf The Ultimate English Grammar Reference and Practice Tool.md +++ /dev/null @@ -1,6 +0,0 @@ -

      How English Works A Grammar Practice Book With Answers Pdf Free Download


      Download ✵✵✵ https://tinurll.com/2uzmIo




      diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hp Touchpad Webos Doctor 3.0.5 [NEW] Download.md b/spaces/rorallitri/biomedical-language-models/logs/Hp Touchpad Webos Doctor 3.0.5 [NEW] Download.md deleted file mode 100644 index 0d1047995e09dc860b14ec4e98b3899ea08231a8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hp Touchpad Webos Doctor 3.0.5 [NEW] Download.md +++ /dev/null @@ -1,89 +0,0 @@ - -

      HP Touchpad WebOS Doctor 3.0.5 Download: A Guide for HP Touchpad Users

      -

If you are looking for a way to restore your HP Touchpad to its original state, or to update it to the latest version of webOS, you might want to try downloading HP Touchpad WebOS Doctor 3.0.5. This software tool can help you fix various issues with your HP Touchpad, such as boot loops, freezes, and errors. It can also help you upgrade your HP Touchpad to webOS 3.0.5, the latest and final version of webOS released by HP. In this article, we will tell you everything you need to know about the HP Touchpad WebOS Doctor 3.0.5 download and how to use it to restore or update your HP Touchpad.

      -

      hp touchpad webos doctor 3.0.5 download


      Download === https://tinurll.com/2uznN3



      -

      What is HP Touchpad WebOS Doctor 3.0.5?

      -

      HP Touchpad WebOS Doctor 3.0.5 is a software tool that can help you restore or update your HP Touchpad to webOS 3.0.5. It is a Java-based application that can run on Windows or Mac OS computers, and it can communicate with your HP Touchpad via USB cable. It can perform various tasks, such as:

      -
• Erasing all the data and settings on your HP Touchpad and restoring it to its factory state.
• Reinstalling the webOS operating system on your HP Touchpad.
• Updating the webOS operating system on your HP Touchpad to webOS 3.0.5.
• Fixing various issues with your HP Touchpad, such as boot loops, freezes, errors and more.

HP Touchpad WebOS Doctor 3.0.5 is an official tool released by HP, and it is safe and reliable to use. However, you should always back up your data before using it, as it will erase everything on your HP Touchpad.

      -

      What are the benefits of HP Touchpad WebOS Doctor 3.0.5?

      -

      Some of the benefits of HP Touchpad WebOS Doctor 3.0.5 are:

      -

      -
        -
      • It can help you restore your HP Touchpad to its original state if you have encountered any problems with it.
      • -
      • It can help you update your HP Touchpad to webOS 3.0.5, which is the latest and final version of webOS released by HP.
      • -
      • WebOS 3.0.5 has some improvements and bug fixes over the previous versions of webOS, such as faster performance, better battery life, enhanced security and more.
      • -
      • WebOS 3.0.5 also has some new features that were not available in the previous versions of webOS, such as Bluetooth keyboard support, camera app, messaging app and more.
      • -
      • WebOS 3.0.5 also allows you to install Android on your HP Touchpad using a dual-boot method, which gives you more options and flexibility with your device.
      • -
      -

      How to download and use HP Touchpad WebOS Doctor 3.0.5?

      -

      If you want to use HP Touchpad WebOS Doctor 3.0.5 to restore or update your HP Touchpad, you need to follow these steps:

      -
        -
      1. Download HP Touchpad WebOS Doctor 3.0.5 from one of the official links provided by XDA Forums (https://forum.xda-developers.com/t/webos-doctor-3-0-0-to-3-0-5-official-links). Choose the version that matches your device (WiFi only or WiFi + 4G).
      2. -
      3. Run the downloaded file on your computer and follow the instructions to install HP Touchpad WebOS Doctor 3.0.5 on your computer.
      4. -
      5. Connect your HP Touchpad to your computer via USB cable and make sure it is fully charged.
      6. -
      7. Launch HP Touchpad WebOS Doctor 3.0.5 on your computer and follow the instructions to restore or update your HP Touchpad.
      8. -
      9. Wait for the process to complete and do not disconnect or interrupt your device until it reboots.
      10. -
      -

      Congratulations, you have successfully used HP Touchpad WebOS Doctor 3.0.5 to restore or update your HP Touchpad.

      -

      How to dual-boot Android and webOS on HP Touchpad?

      -

      If you want to have more options and flexibility with your HP Touchpad, you can dual-boot Android and webOS on your device. This means that you can choose which operating system to use when you turn on your device. You can enjoy the features and benefits of both Android and webOS, and switch between them easily. However, dual-booting Android and webOS on HP Touchpad requires some technical skills and knowledge, and it can void your warranty and damage your device if done incorrectly. Therefore, you should proceed with caution and follow the instructions carefully.

      -

      There are different methods and tools to dual-boot Android and webOS on HP Touchpad, but one of the most popular and reliable ones is to use CyanogenMod 9 (CM9), which is a custom ROM based on Android 4.0 Ice Cream Sandwich. Here are the steps to dual-boot Android and webOS on HP Touchpad using CM9:

      -
        -
      1. Make sure your HP Touchpad is updated to webOS 3.0.5 using HP Touchpad WebOS Doctor 3.0.5.
      2. -
      3. Download CM9 from one of the official links provided by XDA Forums (https://forum.xda-developers.com/t/rom-official-cyanogenmod-9-nightlies-for-the-hp-touchpad.1506998).
      4. -
      5. Download Moboot from one of the official links provided by XDA Forums (https://forum.xda-developers.com/t/moboot-0-3-5-multi-bootloader-for-the-touchpad). Moboot is a bootloader that allows you to choose which operating system to boot.
      6. -
      7. Download ClockworkMod Recovery from one of the official links provided by XDA Forums (https://forum.xda-developers.com/t/recovery-clockworkmod-recovery-v6-0-1-9-touchpad). ClockworkMod Recovery is a tool that allows you to install custom ROMs on your device.
      8. -
      9. Copy CM9, Moboot and ClockworkMod Recovery files to a folder named cminstall on your HP Touchpad.
      10. -
      11. Reboot your HP Touchpad into webOS recovery mode by holding the power button and the volume up button until you see a USB symbol on the screen.
      12. -
      13. Connect your HP Touchpad to your computer via USB cable and run the ACMEInstaller file from the CM9 folder on your computer.
      14. -
      15. Wait for the installation process to complete and do not disconnect or interrupt your device until it reboots.
      16. -
      -

      Congratulations, you have successfully dual-booted Android and webOS on HP Touchpad using CM9. You can now choose which operating system to use when you turn on your device using Moboot.

      -

How to back up and restore your data on HP Touchpad?

      -

If you want to keep your data safe and secure on your HP Touchpad, you should back up and restore your data regularly. This will allow you to recover your data in case of any problems with your device, such as crashes, errors, malfunctions or loss. There are different ways to back up and restore your data on HP Touchpad, depending on which operating system you are using.

      -

If you are using webOS, you can back up and restore your data using the webOS Backup app that is pre-installed on your device. This app will back up your data to your webOS profile online, and you can restore it anytime by logging into your profile. Here are the steps to back up and restore your data using the webOS Backup app:

      -
        -
• To back up your data, launch the webOS Backup app and tap on Back Up Now.
      • -
      • To restore your data, launch the webOS Backup app and tap on Restore.
      • -
      • Choose which data you want to restore and tap on Restore Now.
      • -
      -

If you are using Android, you can back up and restore your data using various apps that are available on the Google Play Store, such as Titanium Backup, Helium or Google Drive. These apps will back up your data to your SD card or online storage, and you can restore it anytime by using the same app. Here are the steps to back up and restore your data using the Titanium Backup app:

      -
        -
• To back up your data, launch the Titanium Backup app and tap on Backup/Restore.
      • -
• Select which apps or data you want to back up and tap on Run the batch operation.
      • -
      • To restore your data, launch the Titanium Backup app and tap on Backup/Restore.
      • -
      • Select which apps or data you want to restore and tap on Run the batch operation.
      • -
      -

Note: You need to have root access on your device to use the Titanium Backup app.

      -

      How to troubleshoot common issues with HP Touchpad WebOS Doctor 3.0.5?

      -

      While HP Touchpad WebOS Doctor 3.0.5 is a useful tool that can help you restore or update your HP Touchpad, you might encounter some issues or errors while using it. Here are some of the common issues with HP Touchpad WebOS Doctor 3.0.5 and how to troubleshoot them:

      -
        -
      • HP Touchpad WebOS Doctor 3.0.5 does not recognize your device. This might happen if your device is not in webOS recovery mode, or if your USB cable or port is faulty. To fix this, make sure your device is in webOS recovery mode by holding the power button and the volume up button until you see a USB symbol on the screen. Also, try using a different USB cable or port, or rebooting your computer.
      • -
      • HP Touchpad WebOS Doctor 3.0.5 gets stuck or freezes during the process. This might happen if your internet connection is unstable, or if your device has low battery power. To fix this, make sure your internet connection is stable and reliable, and that your device is fully charged before using HP Touchpad WebOS Doctor 3.0.5.
      • -
      • HP Touchpad WebOS Doctor 3.0.5 fails to complete the process or gives an error message. This might happen if your device has corrupted files or partitions, or if HP Touchpad WebOS Doctor 3.0.5 is corrupted or outdated. To fix this, try using a different version of HP Touchpad WebOS Doctor 3.0.5 from one of the official links provided by XDA Forums (https://forum.xda-developers.com/t/webos-doctor-3-0-0-to-3-0-5-official-links). Also, try formatting your device using webOS recovery mode before using HP Touchpad WebOS Doctor 3.0.5.
      • -
      -

      If none of these solutions work, you might need to contact HP support or visit a service center for further assistance.

      -

      How to get the most out of your HP Touchpad?

      -

      If you have successfully restored or updated your HP Touchpad to webOS 3.0.5 using HP Touchpad WebOS Doctor 3.0.5, you might want to know how to get the most out of your device. Here are some tips on how to use and enjoy your HP Touchpad:

      -
        -
      • Explore the features and apps of webOS 3.0.5, such as Bluetooth keyboard support, camera app, messaging app and more.
      • -
      • Download more apps and games from the webOS App Catalog or from third-party sources, such as Preware or webOS Quick Install.
      • -
      • Customize your device with themes, wallpapers, sounds and tweaks using webOS Theme Manager or webOS Tweaks.
      • -
      • Sync your device with your online accounts, such as Google, Facebook, Dropbox and more using Synergy or other apps.
      • -
      • Dual-boot Android and webOS on your device using CM9 or other custom ROMs and enjoy the best of both worlds.
      • -
      -

      With these tips, you can make your HP Touchpad more fun and functional.

      -

      Conclusion

      -

HP Touchpad WebOS Doctor 3.0.5 is a software tool that can help you restore or update your HP Touchpad to webOS 3.0.5. It can fix various issues with your device, as well as improve its performance and functionality with the latest version of webOS released by HP. It can also allow you to install Android on your device using a dual-boot method, which gives you more options and flexibility with your device. However, you may encounter some issues or errors while using HP Touchpad WebOS Doctor 3.0.5, and you need to troubleshoot them carefully. Moreover, HP Touchpad WebOS Doctor 3.0.5 erases all the data on your device, so you need to back up and restore your data regularly. If you want to get the most out of your HP Touchpad, you need to explore the features and apps of webOS 3.0.5, as well as dual-boot Android and webOS on your device. HP Touchpad WebOS Doctor 3.0.5 is a useful tool that can help you enjoy your HP Touchpad more.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/ruangguru/rg-ds-chatbot-gradio/README.md b/spaces/ruangguru/rg-ds-chatbot-gradio/README.md deleted file mode 100644 index ae6a252c9fd989529154368aacf95713de17d081..0000000000000000000000000000000000000000 --- a/spaces/ruangguru/rg-ds-chatbot-gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rg Chatbot Gradio -emoji: 👀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/runa91/bite_gradio/README.md b/spaces/runa91/bite_gradio/README.md deleted file mode 100644 index c7864295d1a19a0e6f2f08f0848274eab2a4d17f..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BITE -emoji: 👀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.34.0 -app_file: ./scripts/gradio_demo.py -pinned: true -python_version: 3.7.6 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ryu-akm/PetVision_37/model.py b/spaces/ryu-akm/PetVision_37/model.py deleted file mode 100644 index 52c2696c874740179528f0bdae8ce87b774a138f..0000000000000000000000000000000000000000 --- a/spaces/ryu-akm/PetVision_37/model.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import torchvision - -from torch import nn - - -def create_effnetb2_model(num_classes:int=3, - seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. 
- """ - # Create EffNetB2 pretrained weights, transforms and model - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - # Freeze all layers in base model - for param in model.parameters(): - param.requires_grad = False - - # Change classifier head with random seed for reproducibility - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms diff --git a/spaces/safi842/FashionGen/netdissect/server.py b/spaces/safi842/FashionGen/netdissect/server.py deleted file mode 100644 index d8422a2bad5ac2a09d4582a98da4f962dac1a911..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/server.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -import argparse, connexion, os, sys, yaml, json, socket -from netdissect.easydict import EasyDict -from flask import send_from_directory, redirect -from flask_cors import CORS - - -from netdissect.serverstate import DissectionProject - -__author__ = 'Hendrik Strobelt, David Bau' - -CONFIG_FILE_NAME = 'dissect.json' -projects = {} - -app = connexion.App(__name__, debug=False) - - -def get_all_projects(): - res = [] - for key, project in projects.items(): - # print key - res.append({ - 'project': key, - 'info': { - 'layers': [layer['layer'] for layer in project.get_layers()] - } - }) - return sorted(res, key=lambda x: x['project']) - -def get_layers(project): - return { - 'request': {'project': project}, - 'res': projects[project].get_layers() - } - -def get_units(project, layer): - return { - 'request': {'project': project, 'layer': layer}, - 'res': projects[project].get_units(layer) - } - -def get_rankings(project, layer): - return { - 'request': {'project': project, 'layer': layer}, - 'res': projects[project].get_rankings(layer) - } - -def get_levels(project, layer, quantiles): - return { - 'request': {'project': project, 'layer': layer, 'quantiles': quantiles}, - 'res': projects[project].get_levels(layer, quantiles) - } - -def get_channels(project, layer): - answer = dict(channels=projects[project].get_channels(layer)) - return { - 'request': {'project': project, 'layer': layer}, - 'res': answer - } - -def post_generate(gen_req): - project = gen_req['project'] - zs = gen_req.get('zs', None) - ids = gen_req.get('ids', None) - return_urls = gen_req.get('return_urls', False) - assert (zs is None) != (ids is None) # one or the other, not both - ablations = gen_req.get('ablations', []) - interventions = gen_req.get('interventions', None) - # no z avilable if ablations - generated = projects[project].generate_images(zs, ids, interventions, - return_urls=return_urls) - return { - 'request': gen_req, - 'res': generated - } - -def post_features(feat_req): - project = feat_req['project'] - ids = feat_req['ids'] - masks = feat_req.get('masks', None) - layers = feat_req.get('layers', None) - interventions = feat_req.get('interventions', None) - features = projects[project].get_features( - ids, masks, layers, interventions) - return { - 'request': feat_req, - 'res': features - } - -def post_featuremaps(feat_req): - project = feat_req['project'] - ids = feat_req['ids'] - layers = feat_req.get('layers', None) - interventions = feat_req.get('interventions', None) - featuremaps = projects[project].get_featuremaps( - ids, layers, interventions) - return { - 'request': feat_req, - 'res': featuremaps - } - 
-@app.route('/client/') -def send_static(path): - """ serves all files from ./client/ to ``/client/`` - - :param path: path from api call - """ - return send_from_directory(args.client, path) - -@app.route('/data/') -def send_data(path): - """ serves all files from the data dir to ``/dissect/`` - - :param path: path from api call - """ - print('Got the data route for', path) - return send_from_directory(args.data, path) - - -@app.route('/') -def redirect_home(): - return redirect('/client/index.html', code=302) - - -def load_projects(directory): - """ - searches for CONFIG_FILE_NAME in all subdirectories of directory - and creates data handlers for all of them - - :param directory: scan directory - :return: null - """ - project_dirs = [] - # Don't search more than 2 dirs deep. - search_depth = 2 + directory.count(os.path.sep) - for root, dirs, files in os.walk(directory): - if CONFIG_FILE_NAME in files: - project_dirs.append(root) - # Don't get subprojects under a project dir. - del dirs[:] - elif root.count(os.path.sep) >= search_depth: - del dirs[:] - for p_dir in project_dirs: - print('Loading %s' % os.path.join(p_dir, CONFIG_FILE_NAME)) - with open(os.path.join(p_dir, CONFIG_FILE_NAME), 'r') as jf: - config = EasyDict(json.load(jf)) - dh_id = os.path.split(p_dir)[1] - projects[dh_id] = DissectionProject( - config=config, - project_dir=p_dir, - path_url='data/' + os.path.relpath(p_dir, directory), - public_host=args.public_host) - -app.add_api('server.yaml') - -# add CORS support -CORS(app.app, headers='Content-Type') - -parser = argparse.ArgumentParser() -parser.add_argument("--nodebug", default=False) -parser.add_argument("--address", default="127.0.0.1") # 0.0.0.0 for nonlocal use -parser.add_argument("--port", default="5001") -parser.add_argument("--public_host", default=None) -parser.add_argument("--nocache", default=False) -parser.add_argument("--data", type=str, default='dissect') -parser.add_argument("--client", type=str, default='client_dist') - -if __name__ == '__main__': - args = parser.parse_args() - for d in [args.data, args.client]: - if not os.path.isdir(d): - print('No directory %s' % d) - sys.exit(1) - args.data = os.path.abspath(args.data) - args.client = os.path.abspath(args.client) - if args.public_host is None: - args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port)) - app.run(port=int(args.port), debug=not args.nodebug, host=args.address, - use_reloader=False) -else: - args, _ = parser.parse_known_args() - if args.public_host is None: - args.public_host = '%s:%d' % (socket.getfqdn(), int(args.port)) - load_projects(args.data) diff --git a/spaces/samuelinferences/TabPFN/TabPFN/scripts/tabular_evaluation.py b/spaces/samuelinferences/TabPFN/TabPFN/scripts/tabular_evaluation.py deleted file mode 100644 index cd7f36e32948f80d0e266b47828df5f51fe3f78e..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/TabPFN/TabPFN/scripts/tabular_evaluation.py +++ /dev/null @@ -1,284 +0,0 @@ -import time -import os -from pathlib import Path - -from tqdm import tqdm -import random -import numpy as np - -from torch import nn - -from utils import torch_nanmean -from datasets import * -from model_builder import load_model -from scripts.tabular_baselines import get_scoring_string -from scripts import tabular_metrics -from scripts.transformer_prediction_interface import * -from scripts.baseline_prediction_interface import * -""" -=============================== -PUBLIC FUNCTIONS FOR EVALUATION -=============================== -""" - - -def eval_model(i, e, 
valid_datasets, test_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - metrics_test, config_sample, model_path = eval_model_on_ds(i, e, test_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - metrics_valid, _, _ = eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - return {'mean_auc_test': metrics_test['mean_roc_at_1000'], 'mean_auc_valid': metrics_valid['mean_roc_at_1000'], 'mean_ce_test': metrics_test['mean_ce_at_1000'], 'mean_ce_valid': metrics_valid['mean_ce_at_1000'], 'config_sample': config_sample, 'model_path': model_path} - -def eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - - # How to use: evaluate_without_fitting(i,0,valid_datasets, [1024], 100000, add_name=model_string, base_path=base_path,) - def check_file(e): - model_file = f'models_diff/prior_diff_real_checkpoint{add_name}_n_{i}_epoch_{e}.cpkt' - model_path = os.path.join(base_path, model_file) - # print('Evaluate ', model_path) - results_file = os.path.join(base_path, - f'models_diff/prior_diff_real_results{add_name}_n_{i}_epoch_{e}_{eval_addition}.pkl') - if not Path(model_path).is_file(): # or Path(results_file).is_file(): - # print('checkpoint exists: ', Path(model_file).is_file(), ', results are written:', Path(results_file).is_file()) - return None, None, None - return model_file, model_path, results_file - - if e == -1: # use last checkpoint, if e == -1 - for e_ in range(100, -1, -1): - model_file_, model_path_, results_file_ = check_file(e_) - if model_file_ is not None: - e = e_ - model_file, model_path, results_file = model_file_, model_path_, results_file_ - break - else: - model_file, model_path, results_file = check_file(e) - - model, config_sample = load_model(base_path, model_file, device, None, verbose=False) - print(model[2].style_encoder) - - params = {'max_features': config_sample['num_features'] - , 'rescale_features': config_sample["normalize_by_used_features"] - , 'normalize_to_ranking': config_sample["normalize_to_ranking"] - , 'normalize_with_sqrt': config_sample.get("normalize_with_sqrt", False) - } - metrics_valid = evaluate(datasets=valid_datasets, model=model[2], method='transformer', device=device, overwrite=True, - extend_features=True - # just removed the style keyword but transformer is trained with style, just empty - , save=False - , metric_used=tabular_metrics.cross_entropy - , return_tensor=True - , verbose=False - , eval_positions=eval_positions - , bptt=bptt - , base_path=None - , inference_mode=True - , **params - , **kwargs) - - tabular_metrics.calculate_score_per_method(tabular_metrics.auc_metric, 'roc', metrics_valid, valid_datasets, eval_positions) - tabular_metrics.calculate_score_per_method(tabular_metrics.cross_entropy, 'ce', metrics_valid, valid_datasets, eval_positions) - - return metrics_valid, config_sample, model_path - - -def evaluate(datasets, bptt, eval_positions, metric_used, model - , verbose=False - , return_tensor=False - , **kwargs): - """ - Evaluates a list of datasets for a model function. - - :param datasets: List of datasets - :param bptt: maximum sequence length - :param eval_positions: List of positions where to evaluate models - :param verbose: If True, is verbose. - :param metric_used: Which metric is optimized for. 
- :param return_tensor: Wheater to return results as a pytorch.tensor or numpy, this is only relevant for transformer. - :param kwargs: - :return: - """ - overall_result = {'metric_used': get_scoring_string(metric_used) - , 'bptt': bptt - , 'eval_positions': eval_positions} - - aggregated_metric_datasets, num_datasets = torch.tensor(0.0), 0 - - # For each dataset - for [ds_name, X, y, categorical_feats, _, _] in tqdm.tqdm(datasets, desc='Iterate over datasets') if verbose else datasets: - dataset_bptt = min(len(X), bptt) - # if verbose and dataset_bptt < bptt: - # print(f'Dataset too small for given sequence length, reducing to {len(X)} ({bptt})') - - aggregated_metric, num = torch.tensor(0.0), 0 - ds_result = {} - - for eval_position in (eval_positions if verbose else eval_positions): - eval_position_real = int(dataset_bptt * 0.5) if 2 * eval_position > dataset_bptt else eval_position - eval_position_bptt = int(eval_position_real * 2.0) - - r = evaluate_position(X, y, model=model - , num_classes=len(torch.unique(y)) - , categorical_feats = categorical_feats - , bptt = eval_position_bptt - , ds_name=ds_name - , eval_position = eval_position_real - , metric_used = metric_used - ,**kwargs) - - if r is None: - continue - - _, outputs, ys, best_configs, time_used = r - - if torch.is_tensor(outputs): - outputs = outputs.to(outputs.device) - ys = ys.to(outputs.device) - - ys = ys.T - ds_result[f'{ds_name}_best_configs_at_{eval_position}'] = best_configs - ds_result[f'{ds_name}_outputs_at_{eval_position}'] = outputs - ds_result[f'{ds_name}_ys_at_{eval_position}'] = ys - ds_result[f'{ds_name}_time_at_{eval_position}'] = time_used - - new_metric = torch_nanmean(torch.stack([metric_used(ys[i], outputs[i]) for i in range(ys.shape[0])])) - - if not return_tensor: - make_scalar = lambda x: float(x.detach().cpu().numpy()) if (torch.is_tensor(x) and (len(x.shape) == 0)) else x - new_metric = make_scalar(new_metric) - ds_result = {k: make_scalar(ds_result[k]) for k in ds_result.keys()} - - lib = torch if return_tensor else np - if not lib.isnan(new_metric).any(): - aggregated_metric, num = aggregated_metric + new_metric, num + 1 - - overall_result.update(ds_result) - if num > 0: - aggregated_metric_datasets, num_datasets = (aggregated_metric_datasets + (aggregated_metric / num)), num_datasets + 1 - - overall_result['mean_metric'] = aggregated_metric_datasets / num_datasets - - return overall_result - -""" -=============================== -INTERNAL HELPER FUNCTIONS -=============================== -""" - -def check_file_exists(path): - """Checks if a pickle file exists. Returns None if not, else returns the unpickled file.""" - if (os.path.isfile(path)): - print(f'loading results from {path}') - with open(path, 'rb') as f: - return np.load(f, allow_pickle=True).tolist() - return None - -def generate_valid_split(X, y, bptt, eval_position, split_number=1): - """Generates a deteministic train-(test/valid) split. Both splits must contain the same classes and all classes in - the entire datasets. If no such split can be sampled in 7 passes, returns None. - - :param X: torch tensor, feature values - :param y: torch tensor, class values - :param bptt: Number of samples in train + test - :param eval_position: Number of samples in train, i.e. 
from which index values are in test - :param split_number: The split id - :return: - """ - done, seed = False, 13 - - torch.manual_seed(split_number) - perm = torch.randperm(X.shape[0]) if split_number > 1 else torch.arange(0, X.shape[0]) - X, y = X[perm], y[perm] - - while not done: - if seed > 20: - return None, None # No split could be generated in 7 passes, return None - random.seed(seed) - i = random.randint(0, len(X) - bptt) if len(X) - bptt > 0 else 0 - y_ = y[i:i + bptt] - - # Checks if all classes from dataset are contained and classes in train and test are equal (contain same - # classes) and - done = len(torch.unique(y_)) == len(torch.unique(y)) - done = done and torch.all(torch.unique(y_) == torch.unique(y)) - done = done and len(torch.unique(y_[:eval_position])) == len(torch.unique(y_[eval_position:])) - done = done and torch.all(torch.unique(y_[:eval_position]) == torch.unique(y_[eval_position:])) - seed = seed + 1 - - eval_xs = torch.stack([X[i:i + bptt].clone()], 1) - eval_ys = torch.stack([y[i:i + bptt].clone()], 1) - - return eval_xs, eval_ys - - -def evaluate_position(X, y, categorical_feats, model, bptt - , eval_position, overwrite, save, base_path, path_interfix, method, ds_name, fetch_only=False - , max_time=300, split_number=1 - , per_step_normalization=False, **kwargs): - """ - Evaluates a dataset with a 'bptt' number of training samples. - - :param X: Dataset X - :param y: Dataset labels - :param categorical_feats: Indices of categorical features. - :param model: Model function - :param bptt: Sequence length. - :param eval_position: Number of training samples. - :param overwrite: Wheater to ove - :param overwrite: If True, results on disk are overwritten. - :param save: - :param path_interfix: Used for constructing path to write on disk. - :param method: Model name. - :param ds_name: Datset name. - :param fetch_only: Wheater to calculate or only fetch results. 
- :param per_step_normalization: - :param kwargs: - :return: - """ - - if save: - path = os.path.join(base_path, f'results/tabular/{path_interfix}/results_{method}_{ds_name}_{eval_position}_{bptt}_{split_number}.npy') - #log_path = - - ## Load results if on disk - if not overwrite: - result = check_file_exists(path) - if result is not None: - if not fetch_only: - print(f'Loaded saved result for {path}') - return result - elif fetch_only: - print(f'Could not load saved result for {path}') - return None - - ## Generate data splits - eval_xs, eval_ys = generate_valid_split(X, y, bptt, eval_position, split_number=split_number) - if eval_xs is None: - return None - print(f"No dataset could be generated {ds_name} {bptt}") - - eval_ys = (eval_ys > torch.unique(eval_ys).unsqueeze(0)).sum(axis=1).unsqueeze(-1) - - start_time = time.time() - - if isinstance(model, nn.Module): # Two separate predict interfaces for transformer and baselines - outputs, best_configs = transformer_predict(model, eval_xs, eval_ys, eval_position, categorical_feats=categorical_feats, **kwargs), None - else: - _, outputs, best_configs = baseline_predict(model, eval_xs, eval_ys, categorical_feats - , eval_pos=eval_position - , max_time=max_time, **kwargs) - - eval_ys = eval_ys[eval_position:] - if outputs is None: - return None - - if torch.is_tensor(outputs): # Transfers data to cpu for saving - outputs = outputs.cpu() - eval_ys = eval_ys.cpu() - - ds_result = None, outputs, eval_ys, best_configs, time.time() - start_time - - if save: - with open(path, 'wb') as f: - np.save(f, ds_result) - print(f'saved results to {path}') - - return ds_result \ No newline at end of file diff --git a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/__init__.py b/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Layers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Preprocessing/papercup_features.py b/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Preprocessing/papercup_features.py deleted file mode 100644 index eaac7cd8f306bb06290aa2d83a92c2feb75fcc41..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/Preprocessing/papercup_features.py +++ /dev/null @@ -1,637 +0,0 @@ -# Derived from an open-source resource provided by Papercup Technologies Limited -# Resource-Author: Marlene Staib -# Modified by Florian Lux, 2021 - -def generate_feature_lookup(): - return { - '~': {'symbol_type': 'silence'}, - '#': {'symbol_type': 'end of sentence'}, - '?': {'symbol_type': 'questionmark'}, - '!': {'symbol_type': 'exclamationmark'}, - '.': {'symbol_type': 'fullstop'}, - 'ɜ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 'unrounded', - }, - 'ɫ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'lateral-approximant', - }, - 'ə': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'mid', - 'vowel_roundedness': 'unrounded', - }, - 'ɚ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'mid', - 'vowel_roundedness': 'unrounded', - }, - 'a': { - 'symbol_type' : 'phoneme', - 
'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'open', - 'vowel_roundedness': 'unrounded', - }, - 'ð': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'dental', - 'consonant_manner': 'fricative' - }, - 'ɛ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 'unrounded', - }, - 'ɪ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front_central', - 'vowel_openness' : 'close_close-mid', - 'vowel_roundedness': 'unrounded', - }, - 'ᵻ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'close', - 'vowel_roundedness': 'unrounded', - }, - 'ŋ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'nasal' - }, - 'ɔ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 'rounded', - }, - 'ɒ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'open', - 'vowel_roundedness': 'rounded', - }, - 'ɾ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'tap' - }, - 'ʃ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'postalveolar', - 'consonant_manner': 'fricative' - }, - 'θ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'dental', - 'consonant_manner': 'fricative' - }, - 'ʊ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central_back', - 'vowel_openness' : 'close_close-mid', - 'vowel_roundedness': 'unrounded' - }, - 'ʌ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 'unrounded' - }, - 'ʒ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'postalveolar', - 'consonant_manner': 'fricative' - }, - 'æ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'open-mid_open', - 'vowel_roundedness': 'unrounded' - }, - 'b': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'bilabial', - 'consonant_manner': 'stop' - }, - 'ʔ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'glottal', - 'consonant_manner': 'stop' - }, - 'd': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'stop' - }, - 'e': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'close-mid', - 'vowel_roundedness': 'unrounded' - }, - 'f': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'labiodental', - 'consonant_manner': 
'fricative' - }, - 'g': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'stop' - }, - 'h': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'glottal', - 'consonant_manner': 'fricative' - }, - 'i': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'close', - 'vowel_roundedness': 'unrounded' - }, - 'j': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'approximant' - }, - 'k': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'stop' - }, - 'l': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'lateral-approximant' - }, - 'm': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'bilabial', - 'consonant_manner': 'nasal' - }, - 'n': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'nasal' - }, - 'ɳ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'nasal' - }, - 'o': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'close-mid', - 'vowel_roundedness': 'rounded' - }, - 'p': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'bilabial', - 'consonant_manner': 'stop' - }, - 'ɡ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'stop' - }, - 'ɹ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'approximant' - }, - 'r': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'trill' - }, - 's': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'fricative' - }, - 't': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'stop' - }, - 'u': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'close', - 'vowel_roundedness': 'rounded', - }, - 'v': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'labiodental', - 'consonant_manner': 'fricative' - }, - 'w': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'labial-velar', - 'consonant_manner': 'approximant' - }, - 'x': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'fricative' - }, - 'z': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolar', - 'consonant_manner': 'fricative' - }, - 
'ʀ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'uvular', - 'consonant_manner': 'trill' - }, - 'ø': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'close-mid', - 'vowel_roundedness': 'rounded' - }, - 'ç': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'fricative' - }, - 'ɐ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'open', - 'vowel_roundedness': 'unrounded' - }, - 'œ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 'rounded' - }, - 'y': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front', - 'vowel_openness' : 'close', - 'vowel_roundedness': 'rounded' - }, - 'ʏ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'front_central', - 'vowel_openness' : 'close_close-mid', - 'vowel_roundedness': 'rounded' - }, - 'ɑ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'back', - 'vowel_openness' : 'open', - 'vowel_roundedness': 'unrounded' - }, - 'c': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'stop' - }, - 'ɲ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'nasal' - }, - 'ɣ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'velar', - 'consonant_manner': 'fricative' - }, - 'ʎ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'lateral-approximant' - }, - 'β': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'bilabial', - 'consonant_manner': 'fricative' - }, - 'ʝ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'fricative' - }, - 'ɟ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'stop' - }, - 'q': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'uvular', - 'consonant_manner': 'stop' - }, - 'ɕ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'alveolopalatal', - 'consonant_manner': 'fricative' - }, - 'ʲ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', - 'consonant_manner': 'approximant' - }, - 'ɭ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'palatal', # should be retroflex, but palatal should be close enough - 'consonant_manner': 'lateral-approximant' - }, - 'ɵ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'open-mid', - 'vowel_roundedness': 
'rounded' - }, - 'ʑ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'alveolopalatal', - 'consonant_manner': 'fricative' - }, - 'ʋ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'labiodental', - 'consonant_manner': 'approximant' - }, - 'ʁ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'voiced', - 'consonant_place' : 'uvular', - 'consonant_manner': 'fricative' - }, - 'ɨ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'vowel', - 'VUV' : 'voiced', - 'vowel_frontness' : 'central', - 'vowel_openness' : 'close', - 'vowel_roundedness': 'unrounded' - }, - 'ʂ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'palatal', # should be retroflex, but palatal should be close enough - 'consonant_manner': 'fricative' - }, - 'ɬ': { - 'symbol_type' : 'phoneme', - 'vowel_consonant' : 'consonant', - 'VUV' : 'unvoiced', - 'consonant_place' : 'alveolar', # should be noted it's also lateral, but should be close enough - 'consonant_manner': 'fricative' - }, - } # REMEMBER to also add the phonemes added here to the ID lookup table in the TextFrontend as the new highest ID - - -def generate_feature_table(): - ipa_to_phonemefeats = generate_feature_lookup() - - feat_types = set() - for ipa in ipa_to_phonemefeats: - if len(ipa) == 1: - [feat_types.add(feat) for feat in ipa_to_phonemefeats[ipa].keys()] - - feat_to_val_set = dict() - for feat in feat_types: - feat_to_val_set[feat] = set() - for ipa in ipa_to_phonemefeats: - if len(ipa) == 1: - for feat in ipa_to_phonemefeats[ipa]: - feat_to_val_set[feat].add(ipa_to_phonemefeats[ipa][feat]) - - # print(feat_to_val_set) - - value_list = set() - for val_set in [feat_to_val_set[feat] for feat in feat_to_val_set]: - for value in val_set: - value_list.add(value) - # print("{") - # for index, value in enumerate(list(value_list)): - # print('"{}":{},'.format(value,index)) - # print("}") - - value_to_index = { - "dental" : 0, - "postalveolar" : 1, - "mid" : 2, - "close-mid" : 3, - "vowel" : 4, - "silence" : 5, - "consonant" : 6, - "close" : 7, - "velar" : 8, - "stop" : 9, - "palatal" : 10, - "nasal" : 11, - "glottal" : 12, - "central" : 13, - "back" : 14, - "approximant" : 15, - "uvular" : 16, - "open-mid" : 17, - "front_central" : 18, - "front" : 19, - "end of sentence" : 20, - "labiodental" : 21, - "close_close-mid" : 22, - "labial-velar" : 23, - "unvoiced" : 24, - "central_back" : 25, - "trill" : 26, - "rounded" : 27, - "open-mid_open" : 28, - "tap" : 29, - "alveolar" : 30, - "bilabial" : 31, - "phoneme" : 32, - "open" : 33, - "fricative" : 34, - "unrounded" : 35, - "lateral-approximant": 36, - "voiced" : 37, - "questionmark" : 38, - "exclamationmark" : 39, - "fullstop" : 40, - "alveolopalatal" : 41 - } - - phone_to_vector = dict() - for ipa in ipa_to_phonemefeats: - if len(ipa) == 1: - phone_to_vector[ipa] = [0] * sum([len(values) for values in [feat_to_val_set[feat] for feat in feat_to_val_set]]) - for feat in ipa_to_phonemefeats[ipa]: - if ipa_to_phonemefeats[ipa][feat] in value_to_index: - phone_to_vector[ipa][value_to_index[ipa_to_phonemefeats[ipa][feat]]] = 1 - - for feat in feat_to_val_set: - for value in feat_to_val_set[feat]: - if value not in value_to_index: - print(f"Unknown feature value in featureset! 
{value}") - - # print(f"{sum([len(values) for values in [feat_to_val_set[feat] for feat in feat_to_val_set]])} should be 42") - - return phone_to_vector - - -def generate_phone_to_id_lookup(): - ipa_to_phonemefeats = generate_feature_lookup() - count = 0 - phone_to_id = dict() - for key in sorted(list(ipa_to_phonemefeats)): # careful: non-deterministic - phone_to_id[key] = count - count += 1 - return phone_to_id - - -if __name__ == '__main__': - print(generate_phone_to_id_lookup()) diff --git a/spaces/sarinam/speaker-anonymization-gan/anonymization/__init__.py b/spaces/sarinam/speaker-anonymization-gan/anonymization/__init__.py deleted file mode 100644 index ea9ee1c4c0f49b268ac2d80dafcd141d46cfbc9f..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization-gan/anonymization/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .demo_pool_anonymizer import DemoPoolAnonymizer -from .demo_random_anonymizer import DemoRandomAnonymizer -from .demo_gan_anonymizer import DemoGANAnonymizer diff --git a/spaces/sarinam/speaker-anonymization/README.md b/spaces/sarinam/speaker-anonymization/README.md deleted file mode 100644 index d15d92c7f5efc9b225adfaf2b578149ac20becce..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Speaker Anonymization -emoji: 🐠 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/Lalita Ke Aansoo Pdf Download BEST.md b/spaces/scedlatioru/img-to-music/example/Lalita Ke Aansoo Pdf Download BEST.md deleted file mode 100644 index d27d30573a691690dc9f2745fb5c485c9fd3a0da..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Lalita Ke Aansoo Pdf Download BEST.md +++ /dev/null @@ -1,8 +0,0 @@ -

      Lalita Ke Aansoo Pdf Download


      Download File ->>->>->> https://gohhs.com/2uEyW0



      - -A book of epic poetry in Hindi called Lalita Ke Ansu written by Krant M.L. Verma, was published in 1978. This book is a tragic story about death. The story of Shri Shiva and Lakshmi. -Siddhi (sky) - Lord Shiva, a great saint and yogi, he perfectly mastered the techniques of meditation and achieved siddhi, which means that he became capable of great manifestations of power in meditation. -He was known as guru guru, guru teacher, guru guru guru and guru guru guru. He is very famous as "Shiva-Nataraja". 8a78ff9644
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Saeed B Niku Introduction To Robotics Pdf Free Download.md b/spaces/scedlatioru/img-to-music/example/Saeed B Niku Introduction To Robotics Pdf Free Download.md deleted file mode 100644 index 8e2e958c7ab01c2096ad29dc6d66e6adbebb783c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Saeed B Niku Introduction To Robotics Pdf Free Download.md +++ /dev/null @@ -1,22 +0,0 @@ -

      saeed b niku introduction to robotics pdf free download


      Download Zip ---> https://gohhs.com/2uEA7E



      -
      -PDE model for heat conduction in the spherical. That is, even at the event horizon, we can still observe the quantum vacuum fluctuations by looking in the opposite direction and we can see the scattering waves. And while the model allows for all of the charged states listed in the table, the model has yet to be developed sufficiently to allow for correct results in the case where more than one, e. - -The original K41 model derived by Kraichnan [4] assumed that the dispersion relationship for a turbulent flow of a homogeneous, isotropic and incompressible fluid is given bywhere the constant of proportionality c is called the Kolmogorov constant. - -A sample file is in the attached spreadsheet document. As in any model, we must test the significance of the obtained values by looking at the results of a set of statistical experiments and this is why computer programs are often used. - -The excited states are represented by Dirac matrices. In this model, the inertial range is defined as the region where the term sine λ 2 Dt is smaller than the term c cosine λ Dt. - -Physics, Mechanics, and Propulsion (2008) By Greifer and Spence, this study provides an update on the physical significance of the factors derived by Kraichnan, and how these relate to the physics of the mixing of fluids and gases. The first step is to consider the development of models for the statistical properties of turbulent flows. - -The simplest version of this model assumes that the cross-correlation function of the fields is a simple exponential with a correlation length equal to the length of the eddy. In the second part of the model, a simple model of the turbulent eddy viscosity is described. - -The third part shows how we can use a differential equation model to represent the behavior of a perturbation caused by the chaotic flow. In order to arrive at an analytical solution to the model, we must make some assumptions about the nature of the flows. - -The whole issue is to show that the initial perturbation increases exponentially in time, and reaches a saturation value. This approach is very interesting, because we can derive a powerful analytical solution for the statistics of turbulent flows by means of elementary differential equations. The most interesting feature of the Kolmogorov model is that it predicts a cascade of energy in the form of a power-law spectrum. - -The angular frequency is called the characteristic frequency or 4fefd39f24
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Tecno Camon IClick IN6 Flash File MT6763 Frp Dead Fix Customer Care File.md b/spaces/scedlatioru/img-to-music/example/Tecno Camon IClick IN6 Flash File MT6763 Frp Dead Fix Customer Care File.md deleted file mode 100644 index 83288e69e5b884fc135944e8d53096c3d11e68c5..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Tecno Camon IClick IN6 Flash File MT6763 Frp Dead Fix Customer Care File.md +++ /dev/null @@ -1,17 +0,0 @@ -

      Tecno Camon iClick IN6 Flash File MT6763 Frp Dead Fix Customer Care File


      DOWNLOAD ☆☆☆☆☆ https://gohhs.com/2uEzZe



      - -... Christa Tecno Camon iClick IN6 Flash File MT6763 Frp Dead Fix Customer Care File db3a3b59a1 dinamica de sistemas y control eronini pdf download 2021. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. Read -completely. -Christa Tecno Camon iClick IN5 Flash File MT6739 Frp Dead fix Customer Care file db3a3b59a1 dinamica de sistemas y control eronini pdf download 2021. -6. 7. 8. 9. 10. . -8. 9. 10. -11. 12. -13. 14. -15. 16. -17. 18. -19. 20. -21. 22. -23. 24. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/WordFast Pro 3.0.r Crack.rar.md b/spaces/scedlatioru/img-to-music/example/WordFast Pro 3.0.r Crack.rar.md deleted file mode 100644 index 7fbef6dbbafe5788eb0413a8057378c257f60e37..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/WordFast Pro 3.0.r Crack.rar.md +++ /dev/null @@ -1,65 +0,0 @@ - -

      WordFast Pro 3.0.r Crack: A Powerful Tool for Translators

      - -

      If you are looking for a fast and reliable translation software, you might want to try WordFast Pro 3.0.r crack. This is a cracked version of WordFast Pro, a popular and professional translation memory (TM) tool that supports multiple file formats, languages, and platforms. WordFast Pro 3.0.r crack allows you to enjoy all the features of the original software without paying for a license.

      -

      WordFast Pro 3.0.r crack.rar


      DOWNLOADhttps://gohhs.com/2uEA9Y



      - -

      What is WordFast Pro?

      - -

      WordFast Pro is a standalone, multi-platform TM tool that can help you translate faster and more accurately. It has a user-friendly interface that lets you choose between a tag mark-up editor or a WYSIWYG (what you see is what you get) editor. You can also customize your shortcut keys and preferences according to your needs.

      - -

      WordFast Pro can handle various file types, such as MS Word, Excel, PowerPoint, HTML, XML, PDF, and more. You can also chain files together and work on multilingual projects with ease. WordFast Pro supports over 100 languages and allows you to access unlimited TM and glossary resources. You can also integrate it with machine translation engines and online dictionaries for extra assistance.

      - -

      What are the benefits of WordFast Pro 3.0.r crack?

      - -

      WordFast Pro 3.0.r crack is a free download that gives you access to the full version of WordFast Pro without any limitations. You can use it on Windows, Mac, or Linux operating systems and enjoy its advanced features. Some of the benefits of WordFast Pro 3.0.r crack are:

      - -
        -
      • You can save money on buying a license and use it for other purposes.
      • -
      • You can work on any project without worrying about expiration dates or activation codes.
      • -
      • You can update your software whenever a new version is available and get the latest improvements and bug fixes.
      • -
      • You can share your software with other translators and collaborate on projects more easily.
      • -
      - -

      How to download and install WordFast Pro 3.0.r crack?

      - -

      If you want to try WordFast Pro 3.0.r crack, you can follow these simple steps:

      - -
        -
      1. Go to one of the websites that offer WordFast Pro 3.0.r crack and click on the download link.
      2. -
      3. Extract the zip file and run the setup.exe file.
      4. -
      5. Follow the installation instructions and choose your preferred language and location.
      6. -
      7. Copy the crack file from the folder and paste it into the installation directory.
      8. -
      9. Launch WordFast Pro and start translating.
      10. -
      - -

      Conclusion

      - -

      WordFast Pro 3.0.r crack is a great option for translators who want to use a powerful TM tool without spending a lot of money. It has all the features of the original software and works on any platform. However, you should be aware of the possible risks of using cracked software, such as viruses, malware, or legal issues. If you want to support the developers and get official support and updates, you should consider buying a license from their website.

      -

      -

      What are the drawbacks of WordFast Pro 3.0.r crack?

      - -

      While WordFast Pro 3.0.r crack may seem like a tempting option for translators who want to save money and time, it also comes with some drawbacks that you should be aware of. Some of the disadvantages of WordFast Pro 3.0.r crack are:

      - -
        -
      • You may expose your computer to viruses, malware, or spyware that can harm your system or steal your data.
      • -
      • You may violate the intellectual property rights of the developers and face legal consequences or penalties.
      • -
      • You may not get any official support or updates from the developers and encounter bugs or errors that can affect your work quality.
      • -
      • You may damage your reputation and credibility as a professional translator and lose clients or opportunities.
      • -
      - -

      What are the alternatives to WordFast Pro 3.0.r crack?

      - -

      If you want to use a reliable and ethical TM tool, you should avoid using WordFast Pro 3.0.r crack and look for other alternatives. Some of the options you can consider are:

      - -
        -
      • Buy a license from the official website of WordFast Pro and enjoy all the benefits of the original software.
      • -
      • Use a free or open-source TM tool, such as OmegaT, Anaphraseus, or MateCat, that can offer similar features and functionality.
      • -
      • Use an online TM tool, such as Google Translator Toolkit, SDL Trados Studio Online, or Memsource Cloud, that can provide cloud-based storage and access.
      • -
      - -

      Conclusion

      - -

      WordFast Pro 3.0.r crack is a cracked version of WordFast Pro, a powerful TM tool that can help you translate faster and more accurately. It has all the features of the original software and works on any platform. However, it also has some drawbacks, such as viruses, malware, legal issues, or lack of support and updates. If you want to use a reliable and ethical TM tool, you should avoid using WordFast Pro 3.0.r crack and look for other alternatives.

      -
      -
      \ No newline at end of file diff --git a/spaces/seduerr/text_analytics/text_analytics/utils/utils.py b/spaces/seduerr/text_analytics/text_analytics/utils/utils.py deleted file mode 100644 index 870a1cc4bb913d709148e30fe6f5c1274ff3d78e..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/utils/utils.py +++ /dev/null @@ -1,37 +0,0 @@ -import spacy - -from spacy.tokens import Doc -from spacy.tokens import Span -from spacy.tokens import Token -from typing import List - -from text_analytics.constants import ACCEPTED_LANGUAGES - - -def split_text_into_paragraphs(text: str) -> List[str]: - text_aux = text.strip() - paragraphs = text_aux.split('\n\n') # Strip any leading whitespaces - - for p in paragraphs: - p = p.strip() - - return [p.strip() for p in paragraphs if len(p) > 0] # Don't count empty paragraphs - - -def split_text_into_sentences(text: str) -> List[str]: - nlp = spacy.load('en_core_web_sm', disable=['tagger', 'parser', 'ner']) - nlp.add_pipe('sentencizer') - text_spacy = nlp(text) - return [str(sentence) for sentence in text_spacy.sents] - - -def is_content_word(token: Token) -> bool: - result = token.is_alpha and token.pos_ in ['PROPN', 'NOUN', 'VERB', 'ADJ', 'ADV'] - return result - - -def is_word(token: Token) -> bool: - return token.is_alpha - -def split_doc_into_sentences(doc: Doc) -> List[Span]: - return [s for s in doc.sents if len(s.text.strip()) > 0] \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/iterators/sequence_iter_factory.py b/spaces/segments-tobias/conex/espnet2/iterators/sequence_iter_factory.py deleted file mode 100644 index 48f61f8c7dfa57530c889bd718a5f44c8c6e1060..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/iterators/sequence_iter_factory.py +++ /dev/null @@ -1,143 +0,0 @@ -from typing import Any -from typing import Sequence -from typing import Union - -import numpy as np -from torch.utils.data import DataLoader -from typeguard import check_argument_types - -from espnet2.iterators.abs_iter_factory import AbsIterFactory -from espnet2.samplers.abs_sampler import AbsSampler - - -class RawSampler(AbsSampler): - def __init__(self, batches): - self.batches = batches - - def __len__(self): - return len(self.batches) - - def __iter__(self): - return iter(self.batches) - - def generate(self, seed): - return list(self.batches) - - -class SequenceIterFactory(AbsIterFactory): - """Build iterator for each epoch. - - This class simply creates pytorch DataLoader except for the following points: - - The random seed is decided according to the number of epochs. This feature - guarantees reproducibility when resuming from middle of training process. - - Enable to restrict the number of samples for one epoch. This features - controls the interval number between training and evaluation. 
- - """ - - def __init__( - self, - dataset, - batches: Union[AbsSampler, Sequence[Sequence[Any]]], - num_iters_per_epoch: int = None, - seed: int = 0, - shuffle: bool = False, - num_workers: int = 0, - collate_fn=None, - pin_memory: bool = False, - ): - assert check_argument_types() - - if not isinstance(batches, AbsSampler): - self.sampler = RawSampler(batches) - else: - self.sampler = batches - - self.dataset = dataset - self.num_iters_per_epoch = num_iters_per_epoch - self.shuffle = shuffle - self.seed = seed - self.num_workers = num_workers - self.collate_fn = collate_fn - # https://discuss.pytorch.org/t/what-is-the-disadvantage-of-using-pin-memory/1702 - self.pin_memory = pin_memory - - def build_iter(self, epoch: int, shuffle: bool = None) -> DataLoader: - if shuffle is None: - shuffle = self.shuffle - - if self.num_iters_per_epoch is not None: - N = len(self.sampler) - # If corpus size is larger than the num_per_epoch - if self.num_iters_per_epoch < N: - N = len(self.sampler) - real_epoch, offset = divmod(self.num_iters_per_epoch * epoch, N) - - if offset >= self.num_iters_per_epoch: - current_batches = self.sampler.generate(real_epoch + self.seed) - if shuffle: - np.random.RandomState(real_epoch + self.seed).shuffle( - current_batches - ) - batches = current_batches[ - offset - self.num_iters_per_epoch : offset - ] - else: - prev_batches = self.sampler.generate(real_epoch - 1 + self.seed) - current_batches = self.sampler.generate(real_epoch + self.seed) - if shuffle: - np.random.RandomState(real_epoch - 1 + self.seed).shuffle( - prev_batches - ) - np.random.RandomState(real_epoch + self.seed).shuffle( - current_batches - ) - batches = ( - prev_batches[offset - self.num_iters_per_epoch :] - + current_batches[:offset] - ) - - # If corpus size is less than the num_per_epoch - else: - _epoch, _cursor = divmod(self.num_iters_per_epoch * (epoch - 1), N) - _remain = self.num_iters_per_epoch - batches = [] - current_batches = self.sampler.generate(_epoch + self.seed) - if shuffle: - np.random.RandomState(_epoch + self.seed).shuffle(current_batches) - while _remain > 0: - - _batches = current_batches[_cursor : _cursor + _remain] - batches += _batches - if _cursor + _remain >= N: - _epoch += 1 - _cursor = 0 - current_batches = self.sampler.generate(_epoch + self.seed) - if shuffle: - np.random.RandomState(_epoch + self.seed).shuffle( - current_batches - ) - else: - _cursor = _cursor + _remain - _remain -= len(_batches) - - assert len(batches) == self.num_iters_per_epoch - - else: - batches = self.sampler.generate(epoch + self.seed) - if shuffle: - np.random.RandomState(epoch + self.seed).shuffle(batches) - - # For backward compatibility for pytorch DataLoader - if self.collate_fn is not None: - kwargs = dict(collate_fn=self.collate_fn) - else: - kwargs = {} - - return DataLoader( - dataset=self.dataset, - batch_sampler=batches, - num_workers=self.num_workers, - pin_memory=self.pin_memory, - **kwargs, - ) diff --git a/spaces/segments/panoptic-segment-anything-api/segment_anything/segment_anything/utils/__init__.py b/spaces/segments/panoptic-segment-anything-api/segment_anything/segment_anything/utils/__init__.py deleted file mode 100644 index 5277f46157403e47fd830fc519144b97ef69d4ae..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/segment_anything/segment_anything/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
- -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/sh0kul/DTPDC-Deploy/README.md b/spaces/sh0kul/DTPDC-Deploy/README.md deleted file mode 100644 index 5a70d43bc909a9921362d95e339e49c7bad38b8f..0000000000000000000000000000000000000000 --- a/spaces/sh0kul/DTPDC-Deploy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DefectDetection AIbuilders -emoji: 🏃 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_panoptic.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_panoptic.py deleted file mode 100644 index a76c999f96c58b2f44ab363a55dcc1c8c7f1b074..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_panoptic.py +++ /dev/null @@ -1,390 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.file_io import PathManager - -ADE20K_150_CATEGORIES = [ - {"color": [120, 120, 120], "id": 0, "isthing": 0, "name": "wall"}, - {"color": [180, 120, 120], "id": 1, "isthing": 0, "name": "building"}, - {"color": [6, 230, 230], "id": 2, "isthing": 0, "name": "sky"}, - {"color": [80, 50, 50], "id": 3, "isthing": 0, "name": "floor"}, - {"color": [4, 200, 3], "id": 4, "isthing": 0, "name": "tree"}, - {"color": [120, 120, 80], "id": 5, "isthing": 0, "name": "ceiling"}, - {"color": [140, 140, 140], "id": 6, "isthing": 0, "name": "road, route"}, - {"color": [204, 5, 255], "id": 7, "isthing": 1, "name": "bed"}, - {"color": [230, 230, 230], "id": 8, "isthing": 1, "name": "window "}, - {"color": [4, 250, 7], "id": 9, "isthing": 0, "name": "grass"}, - {"color": [224, 5, 255], "id": 10, "isthing": 1, "name": "cabinet"}, - {"color": [235, 255, 7], "id": 11, "isthing": 0, "name": "sidewalk, pavement"}, - {"color": [150, 5, 61], "id": 12, "isthing": 1, "name": "person"}, - {"color": [120, 120, 70], "id": 13, "isthing": 0, "name": "earth, ground"}, - {"color": [8, 255, 51], "id": 14, "isthing": 1, "name": "door"}, - {"color": [255, 6, 82], "id": 15, "isthing": 1, "name": "table"}, - {"color": [143, 255, 140], "id": 16, "isthing": 0, "name": "mountain, mount"}, - {"color": [204, 255, 4], "id": 17, "isthing": 0, "name": "plant"}, - {"color": [255, 51, 7], "id": 18, "isthing": 1, "name": "curtain"}, - {"color": [204, 70, 3], "id": 19, "isthing": 1, "name": "chair"}, - {"color": [0, 102, 200], "id": 20, "isthing": 1, "name": "car"}, - {"color": [61, 230, 250], "id": 21, "isthing": 0, "name": "water"}, - {"color": [255, 6, 51], "id": 22, "isthing": 1, "name": "painting, picture"}, - {"color": [11, 102, 255], "id": 23, "isthing": 1, "name": "sofa"}, - {"color": [255, 7, 71], "id": 24, "isthing": 1, "name": "shelf"}, - {"color": [255, 9, 224], "id": 25, "isthing": 0, "name": "house"}, - {"color": [9, 7, 230], "id": 26, "isthing": 0, "name": "sea"}, - {"color": [220, 220, 220], "id": 27, "isthing": 1, "name": "mirror"}, - {"color": [255, 9, 92], "id": 28, "isthing": 0, "name": "rug"}, - {"color": [112, 9, 255], "id": 29, "isthing": 0, "name": "field"}, - {"color": [8, 255, 214], "id": 30, "isthing": 1, "name": 
"armchair"}, - {"color": [7, 255, 224], "id": 31, "isthing": 1, "name": "seat"}, - {"color": [255, 184, 6], "id": 32, "isthing": 1, "name": "fence"}, - {"color": [10, 255, 71], "id": 33, "isthing": 1, "name": "desk"}, - {"color": [255, 41, 10], "id": 34, "isthing": 0, "name": "rock, stone"}, - {"color": [7, 255, 255], "id": 35, "isthing": 1, "name": "wardrobe, closet, press"}, - {"color": [224, 255, 8], "id": 36, "isthing": 1, "name": "lamp"}, - {"color": [102, 8, 255], "id": 37, "isthing": 1, "name": "tub"}, - {"color": [255, 61, 6], "id": 38, "isthing": 1, "name": "rail"}, - {"color": [255, 194, 7], "id": 39, "isthing": 1, "name": "cushion"}, - {"color": [255, 122, 8], "id": 40, "isthing": 0, "name": "base, pedestal, stand"}, - {"color": [0, 255, 20], "id": 41, "isthing": 1, "name": "box"}, - {"color": [255, 8, 41], "id": 42, "isthing": 1, "name": "column, pillar"}, - {"color": [255, 5, 153], "id": 43, "isthing": 1, "name": "signboard, sign"}, - { - "color": [6, 51, 255], - "id": 44, - "isthing": 1, - "name": "chest of drawers, chest, bureau, dresser", - }, - {"color": [235, 12, 255], "id": 45, "isthing": 1, "name": "counter"}, - {"color": [160, 150, 20], "id": 46, "isthing": 0, "name": "sand"}, - {"color": [0, 163, 255], "id": 47, "isthing": 1, "name": "sink"}, - {"color": [140, 140, 140], "id": 48, "isthing": 0, "name": "skyscraper"}, - {"color": [250, 10, 15], "id": 49, "isthing": 1, "name": "fireplace"}, - {"color": [20, 255, 0], "id": 50, "isthing": 1, "name": "refrigerator, icebox"}, - {"color": [31, 255, 0], "id": 51, "isthing": 0, "name": "grandstand, covered stand"}, - {"color": [255, 31, 0], "id": 52, "isthing": 0, "name": "path"}, - {"color": [255, 224, 0], "id": 53, "isthing": 1, "name": "stairs"}, - {"color": [153, 255, 0], "id": 54, "isthing": 0, "name": "runway"}, - {"color": [0, 0, 255], "id": 55, "isthing": 1, "name": "case, display case, showcase, vitrine"}, - { - "color": [255, 71, 0], - "id": 56, - "isthing": 1, - "name": "pool table, billiard table, snooker table", - }, - {"color": [0, 235, 255], "id": 57, "isthing": 1, "name": "pillow"}, - {"color": [0, 173, 255], "id": 58, "isthing": 1, "name": "screen door, screen"}, - {"color": [31, 0, 255], "id": 59, "isthing": 0, "name": "stairway, staircase"}, - {"color": [11, 200, 200], "id": 60, "isthing": 0, "name": "river"}, - {"color": [255, 82, 0], "id": 61, "isthing": 0, "name": "bridge, span"}, - {"color": [0, 255, 245], "id": 62, "isthing": 1, "name": "bookcase"}, - {"color": [0, 61, 255], "id": 63, "isthing": 0, "name": "blind, screen"}, - {"color": [0, 255, 112], "id": 64, "isthing": 1, "name": "coffee table"}, - { - "color": [0, 255, 133], - "id": 65, - "isthing": 1, - "name": "toilet, can, commode, crapper, pot, potty, stool, throne", - }, - {"color": [255, 0, 0], "id": 66, "isthing": 1, "name": "flower"}, - {"color": [255, 163, 0], "id": 67, "isthing": 1, "name": "book"}, - {"color": [255, 102, 0], "id": 68, "isthing": 0, "name": "hill"}, - {"color": [194, 255, 0], "id": 69, "isthing": 1, "name": "bench"}, - {"color": [0, 143, 255], "id": 70, "isthing": 1, "name": "countertop"}, - {"color": [51, 255, 0], "id": 71, "isthing": 1, "name": "stove"}, - {"color": [0, 82, 255], "id": 72, "isthing": 1, "name": "palm, palm tree"}, - {"color": [0, 255, 41], "id": 73, "isthing": 1, "name": "kitchen island"}, - {"color": [0, 255, 173], "id": 74, "isthing": 1, "name": "computer"}, - {"color": [10, 0, 255], "id": 75, "isthing": 1, "name": "swivel chair"}, - {"color": [173, 255, 0], "id": 76, "isthing": 1, "name": "boat"}, - 
{"color": [0, 255, 153], "id": 77, "isthing": 0, "name": "bar"}, - {"color": [255, 92, 0], "id": 78, "isthing": 1, "name": "arcade machine"}, - {"color": [255, 0, 255], "id": 79, "isthing": 0, "name": "hovel, hut, hutch, shack, shanty"}, - {"color": [255, 0, 245], "id": 80, "isthing": 1, "name": "bus"}, - {"color": [255, 0, 102], "id": 81, "isthing": 1, "name": "towel"}, - {"color": [255, 173, 0], "id": 82, "isthing": 1, "name": "light"}, - {"color": [255, 0, 20], "id": 83, "isthing": 1, "name": "truck"}, - {"color": [255, 184, 184], "id": 84, "isthing": 0, "name": "tower"}, - {"color": [0, 31, 255], "id": 85, "isthing": 1, "name": "chandelier"}, - {"color": [0, 255, 61], "id": 86, "isthing": 1, "name": "awning, sunshade, sunblind"}, - {"color": [0, 71, 255], "id": 87, "isthing": 1, "name": "street lamp"}, - {"color": [255, 0, 204], "id": 88, "isthing": 1, "name": "booth"}, - {"color": [0, 255, 194], "id": 89, "isthing": 1, "name": "tv"}, - {"color": [0, 255, 82], "id": 90, "isthing": 1, "name": "plane"}, - {"color": [0, 10, 255], "id": 91, "isthing": 0, "name": "dirt track"}, - {"color": [0, 112, 255], "id": 92, "isthing": 1, "name": "clothes"}, - {"color": [51, 0, 255], "id": 93, "isthing": 1, "name": "pole"}, - {"color": [0, 194, 255], "id": 94, "isthing": 0, "name": "land, ground, soil"}, - { - "color": [0, 122, 255], - "id": 95, - "isthing": 1, - "name": "bannister, banister, balustrade, balusters, handrail", - }, - { - "color": [0, 255, 163], - "id": 96, - "isthing": 0, - "name": "escalator, moving staircase, moving stairway", - }, - { - "color": [255, 153, 0], - "id": 97, - "isthing": 1, - "name": "ottoman, pouf, pouffe, puff, hassock", - }, - {"color": [0, 255, 10], "id": 98, "isthing": 1, "name": "bottle"}, - {"color": [255, 112, 0], "id": 99, "isthing": 0, "name": "buffet, counter, sideboard"}, - { - "color": [143, 255, 0], - "id": 100, - "isthing": 0, - "name": "poster, posting, placard, notice, bill, card", - }, - {"color": [82, 0, 255], "id": 101, "isthing": 0, "name": "stage"}, - {"color": [163, 255, 0], "id": 102, "isthing": 1, "name": "van"}, - {"color": [255, 235, 0], "id": 103, "isthing": 1, "name": "ship"}, - {"color": [8, 184, 170], "id": 104, "isthing": 1, "name": "fountain"}, - { - "color": [133, 0, 255], - "id": 105, - "isthing": 0, - "name": "conveyer belt, conveyor belt, conveyer, conveyor, transporter", - }, - {"color": [0, 255, 92], "id": 106, "isthing": 0, "name": "canopy"}, - { - "color": [184, 0, 255], - "id": 107, - "isthing": 1, - "name": "washer, automatic washer, washing machine", - }, - {"color": [255, 0, 31], "id": 108, "isthing": 1, "name": "plaything, toy"}, - {"color": [0, 184, 255], "id": 109, "isthing": 0, "name": "pool"}, - {"color": [0, 214, 255], "id": 110, "isthing": 1, "name": "stool"}, - {"color": [255, 0, 112], "id": 111, "isthing": 1, "name": "barrel, cask"}, - {"color": [92, 255, 0], "id": 112, "isthing": 1, "name": "basket, handbasket"}, - {"color": [0, 224, 255], "id": 113, "isthing": 0, "name": "falls"}, - {"color": [112, 224, 255], "id": 114, "isthing": 0, "name": "tent"}, - {"color": [70, 184, 160], "id": 115, "isthing": 1, "name": "bag"}, - {"color": [163, 0, 255], "id": 116, "isthing": 1, "name": "minibike, motorbike"}, - {"color": [153, 0, 255], "id": 117, "isthing": 0, "name": "cradle"}, - {"color": [71, 255, 0], "id": 118, "isthing": 1, "name": "oven"}, - {"color": [255, 0, 163], "id": 119, "isthing": 1, "name": "ball"}, - {"color": [255, 204, 0], "id": 120, "isthing": 1, "name": "food, solid food"}, - {"color": [255, 0, 143], 
"id": 121, "isthing": 1, "name": "step, stair"}, - {"color": [0, 255, 235], "id": 122, "isthing": 0, "name": "tank, storage tank"}, - {"color": [133, 255, 0], "id": 123, "isthing": 1, "name": "trade name"}, - {"color": [255, 0, 235], "id": 124, "isthing": 1, "name": "microwave"}, - {"color": [245, 0, 255], "id": 125, "isthing": 1, "name": "pot"}, - {"color": [255, 0, 122], "id": 126, "isthing": 1, "name": "animal"}, - {"color": [255, 245, 0], "id": 127, "isthing": 1, "name": "bicycle"}, - {"color": [10, 190, 212], "id": 128, "isthing": 0, "name": "lake"}, - {"color": [214, 255, 0], "id": 129, "isthing": 1, "name": "dishwasher"}, - {"color": [0, 204, 255], "id": 130, "isthing": 1, "name": "screen"}, - {"color": [20, 0, 255], "id": 131, "isthing": 0, "name": "blanket, cover"}, - {"color": [255, 255, 0], "id": 132, "isthing": 1, "name": "sculpture"}, - {"color": [0, 153, 255], "id": 133, "isthing": 1, "name": "hood, exhaust hood"}, - {"color": [0, 41, 255], "id": 134, "isthing": 1, "name": "sconce"}, - {"color": [0, 255, 204], "id": 135, "isthing": 1, "name": "vase"}, - {"color": [41, 0, 255], "id": 136, "isthing": 1, "name": "traffic light"}, - {"color": [41, 255, 0], "id": 137, "isthing": 1, "name": "tray"}, - {"color": [173, 0, 255], "id": 138, "isthing": 1, "name": "trash can"}, - {"color": [0, 245, 255], "id": 139, "isthing": 1, "name": "fan"}, - {"color": [71, 0, 255], "id": 140, "isthing": 0, "name": "pier"}, - {"color": [122, 0, 255], "id": 141, "isthing": 0, "name": "crt screen"}, - {"color": [0, 255, 184], "id": 142, "isthing": 1, "name": "plate"}, - {"color": [0, 92, 255], "id": 143, "isthing": 1, "name": "monitor"}, - {"color": [184, 255, 0], "id": 144, "isthing": 1, "name": "bulletin board"}, - {"color": [0, 133, 255], "id": 145, "isthing": 0, "name": "shower"}, - {"color": [255, 214, 0], "id": 146, "isthing": 1, "name": "radiator"}, - {"color": [25, 194, 194], "id": 147, "isthing": 1, "name": "glass, drinking glass"}, - {"color": [102, 255, 0], "id": 148, "isthing": 1, "name": "clock"}, - {"color": [92, 0, 255], "id": 149, "isthing": 1, "name": "flag"}, -] - -ADE20k_COLORS = [k["color"] for k in ADE20K_150_CATEGORIES] - -MetadataCatalog.get("ade20k_sem_seg_train").set( - stuff_colors=ADE20k_COLORS[:], -) - -MetadataCatalog.get("ade20k_sem_seg_val").set( - stuff_colors=ADE20k_COLORS[:], -) - - -def load_ade20k_panoptic_json(json_file, image_dir, gt_dir, semseg_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = ann["image_id"] - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. 
Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_ade20k_panoptic( - name, metadata, image_root, panoptic_root, semantic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of ADE20k panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - Args: - name (str): the name that identifies a dataset, - e.g. "ade20k_panoptic_train" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_ade20k_panoptic_json( - panoptic_json, image_root, panoptic_root, semantic_root, metadata - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="ade20k_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -_PREDEFINED_SPLITS_ADE20K_PANOPTIC = { - "ade20k_panoptic_train": ( - "ADEChallengeData2016/images/training", - "ADEChallengeData2016/ade20k_panoptic_train", - "ADEChallengeData2016/ade20k_panoptic_train.json", - "ADEChallengeData2016/annotations_detectron2/training", - "ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_panoptic_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_panoptic_val", - "ADEChallengeData2016/ade20k_panoptic_val.json", - "ADEChallengeData2016/annotations_detectron2/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. 
- thing_classes = [k["name"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in ADE20K_150_CATEGORIES if k["isthing"] == 1] - stuff_classes = [k["name"] for k in ADE20K_150_CATEGORIES] - stuff_colors = [k["color"] for k in ADE20K_150_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(ADE20K_150_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def register_all_ade20k_panoptic(root): - metadata = get_metadata() - for ( - prefix, - (image_root, panoptic_root, panoptic_json, semantic_root, instance_json), - ) in _PREDEFINED_SPLITS_ADE20K_PANOPTIC.items(): - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. used by Panoptic-DeepLab - register_ade20k_panoptic( - prefix, - metadata, - os.path.join(root, image_root), - os.path.join(root, panoptic_root), - os.path.join(root, semantic_root), - os.path.join(root, panoptic_json), - os.path.join(root, instance_json), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_panoptic(_root) diff --git a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/pretrained_example.py b/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/pretrained_example.py deleted file mode 100644 index 63baef08bfa4bf34f52a0cf63e10a0b6783ac316..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan/stylegan_tf/pretrained_example.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Minimal script for generating an image using pre-trained StyleGAN generator.""" - -import os -import pickle -import numpy as np -import PIL.Image -import dnnlib -import dnnlib.tflib as tflib -import config - -def main(): - # Initialize TensorFlow. - tflib.init_tf() - - # Load pre-trained network. - url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl - with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f: - _G, _D, Gs = pickle.load(f) - # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run. - # _D = Instantaneous snapshot of the discriminator. 
Mainly useful for resuming a previous training run. - # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot. - - # Print network details. - Gs.print_layers() - - # Pick latent vector. - rnd = np.random.RandomState(5) - latents = rnd.randn(1, Gs.input_shape[1]) - - # Generate image. - fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) - images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt) - - # Save image. - os.makedirs(config.result_dir, exist_ok=True) - png_filename = os.path.join(config.result_dir, 'example.png') - PIL.Image.fromarray(images[0], 'RGB').save(png_filename) - -if __name__ == "__main__": - main() diff --git a/spaces/silencewing/server/youyou/css/style.css b/spaces/silencewing/server/youyou/css/style.css deleted file mode 100644 index 2c9b4d44b385cac2f599d55eafcb857b60178c74..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/css/style.css +++ /dev/null @@ -1,62 +0,0 @@ -html { - height: 100%; - background: #d7d0b6 ; - align-items: center; - font-size: larger; - font-family: -apple-system,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif -} -.cal_game { - width: 80vw; - margin: 0 auto; -} - -table { - margin: 0 auto; -} - -th { - width: 20vw; - height: 8vw; - text-align: center; - /* line-height: 40px; */ - border: 1px solid #2b7e71; - font-weight: normal; -} - -td { - width: 50vw; - height: 8vw; - text-align: center; - /* line-height: 40px; */ - border: 1px solid #2b7e71; -} - -/* .btn { - width: 150px; - height: 50px; - color: #2b7e71;; - border: 1px solid #2b7e71; - background-color: transparent; -} - -.btn:hover { - background-color: #2b7e71; - color: #fff; - cursor: pointer; -} */ - -input { - height: 99%; - width: 98.5%; - border: 1px solid #999999; - background-color: transparent; - text-align: center; - touch-action: manipulation; - font-size: large; - -} - -/* input:hover { - border: 3px solid #314834; - -} */ \ No newline at end of file diff --git a/spaces/silentchen/layout-guidance/my_model/unet_2d_condition.py b/spaces/silentchen/layout-guidance/my_model/unet_2d_condition.py deleted file mode 100644 index 2a2665781bbb91f780c6f40c708c945a0d97a955..0000000000000000000000000000000000000000 --- a/spaces/silentchen/layout-guidance/my_model/unet_2d_condition.py +++ /dev/null @@ -1,355 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-import pdb -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.utils.checkpoint - -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.modeling_utils import ModelMixin -from diffusers.utils import BaseOutput, logging -from diffusers.models.embeddings import TimestepEmbedding, Timesteps -from .unet_2d_blocks import ( - CrossAttnDownBlock2D, - CrossAttnUpBlock2D, - DownBlock2D, - UNetMidBlock2DCrossAttn, - UpBlock2D, - get_down_block, - get_up_block, -) - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -class UNet2DConditionOutput(BaseOutput): - """ - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Hidden states conditioned on `encoder_hidden_states` input. Output of last layer of model. - """ - - sample: torch.FloatTensor - - -class UNet2DConditionModel(ModelMixin, ConfigMixin): - r""" - UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep - and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - Parameters: - sample_size (`int`, *optional*): The size of the input sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): - The tuple of downsample blocks to use. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)`): - The tuple of upsample blocks to use. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. 
- """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "DownBlock2D", - ), - up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"), - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: int = 32, - norm_eps: float = 1e-5, - cross_attention_dim: int = 1280, - attention_head_dim: int = 8, - ): - super().__init__() - - self.sample_size = sample_size - time_embed_dim = block_out_channels[0] * 4 - - # input - self.conv_in = nn.Conv2d(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1)) - - # time - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - - self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - downsample_padding=downsample_padding, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock2DCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift="default", - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - resnet_groups=norm_num_groups, - ) - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - ) - self.up_blocks.append(up_block) - prev_output_channel = 
output_channel - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1) - - def set_attention_slice(self, slice_size): - if slice_size is not None and self.config.attention_head_dim % slice_size != 0: - raise ValueError( - f"Make sure slice_size {slice_size} is a divisor of " - f"the number of heads used in cross_attention {self.config.attention_head_dim}" - ) - if slice_size is not None and slice_size > self.config.attention_head_dim: - raise ValueError( - f"Chunk_size {slice_size} has to be smaller or equal to " - f"the number of heads used in cross_attention {self.config.attention_head_dim}" - ) - - for block in self.down_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - self.mid_block.set_attention_slice(slice_size) - - for block in self.up_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - def set_use_memory_efficient_attention_xformers(self, use_memory_efficient_attention_xformers: bool): - for block in self.down_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_use_memory_efficient_attention_xformers(use_memory_efficient_attention_xformers) - - self.mid_block.set_use_memory_efficient_attention_xformers(use_memory_efficient_attention_xformers) - - for block in self.up_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_use_memory_efficient_attention_xformers(use_memory_efficient_attention_xformers) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (CrossAttnDownBlock2D, DownBlock2D, CrossAttnUpBlock2D, UpBlock2D)): - module.gradient_checkpointing = value - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - r""" - Args: - sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs_coarse tensor - timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps - encoder_hidden_states (`torch.FloatTensor`): (batch, channel, height, width) encoder hidden states - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # By default samples have to be AT least a multiple of the overall upsampling factor. - # The overall upsampling factor is equal to 2 ** (# num of upsampling layears). - # However, the upsampling interpolation output size can be forced to fit any upsampling size - # on the fly if necessary. - default_overall_up_factor = 2**self.num_upsamplers - - # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` - forward_upsample_size = False - upsample_size = None - - if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): - logger.info("Forward upsample size to force interpolation output size.") - forward_upsample_size = True - - # 0. 
center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand(sample.shape[0]) - - t_emb = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might actually be running in fp16. so we need to cast here. - # there might be better ways to encapsulate this. - t_emb = t_emb.to(dtype=self.dtype) - emb = self.time_embedding(t_emb) - # 2. pre-process - sample = self.conv_in(sample) - # 3. down - attn_down = [] - down_block_res_samples = (sample,) - for block_idx, downsample_block in enumerate(self.down_blocks): - if hasattr(downsample_block, "attentions") and downsample_block.attentions is not None: - sample, res_samples, cross_atten_prob = downsample_block( - hidden_states=sample, - temb=emb, - encoder_hidden_states=encoder_hidden_states - ) - attn_down.append(cross_atten_prob) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb) - - down_block_res_samples += res_samples - - # 4. mid - sample, attn_mid = self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states) - - # 5. up - attn_up = [] - for i, upsample_block in enumerate(self.up_blocks): - is_final_block = i == len(self.up_blocks) - 1 - - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - # if we have not reached the final block and need to forward the - # upsample size, we do it here - if not is_final_block and forward_upsample_size: - upsample_size = down_block_res_samples[-1].shape[2:] - - if hasattr(upsample_block, "attentions") and upsample_block.attentions is not None: - sample, cross_atten_prob = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - upsample_size=upsample_size, - ) - attn_up.append(cross_atten_prob) - else: - sample = upsample_block( - hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size - ) - # 6. post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample), attn_up, attn_mid, attn_down diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Descarga gratis Clash of Clans APK y disfruta de sus increbles grficos y jugabilidad.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Descarga gratis Clash of Clans APK y disfruta de sus increbles grficos y jugabilidad.md deleted file mode 100644 index d4cff74cd59c0bcd664e7b2d879f4b6ef027515b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Descarga gratis Clash of Clans APK y disfruta de sus increbles grficos y jugabilidad.md +++ /dev/null @@ -1,149 +0,0 @@ - -

Clash of Clans APK Download: How to Download and Play the Popular Strategy Game on Your Android Device

      -

      If you are a fan of strategy games, you have probably heard of Clash of Clans, one of the most popular and addictive games in the genre. In this game, you can build your own village, raise a clan, and compete in epic clan wars with millions of players worldwide. But did you know that you can also play Clash of Clans on your Android device? In this article, we will show you how to download and play Clash of Clans APK on your Android device, as well as some tips and tricks to help you succeed in the game.

      -

      What is Clash of Clans?

      -

      A brief introduction to the game and its features

      -

      Clash of Clans is a freemium strategy game developed by Supercell, a Finnish game company. It was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has become one of the most downloaded and highest-grossing games in the world, with over 500 million downloads and billions of dollars in revenue.

      -

      clash of clans apk descargar


Download File: https://ssurll.com/2uNRhM



      -

      The game is set in a fantasy world where you can create your own village, train various troops, and join or create a clan with other players. You can also attack other players' villages or defend your own from enemy raids. The game features several modes, such as single-player campaigns, multiplayer battles, clan wars, clan games, events, and more. You can also customize your village with different buildings, decorations, heroes, spells, siege machines, and skins.

      -

      The benefits of playing Clash of Clans on Android

      -

      Playing Clash of Clans on Android has many benefits, such as:

      -
        -
      • You can enjoy the game on a larger screen than your phone.
      • -
      • You can use your Google account to sync your progress across multiple devices.
      • -
      • You can access the Google Play Store to download updates, get rewards, and join communities.
      • -
      • You can use various tools and apps to enhance your gaming experience, such as screen recorders, emulators, modded APKs, etc.
      • -
• You can quickly jump back into the game on mobile data or Wi-Fi, although keep in mind that Clash of Clans requires an internet connection and cannot be played fully offline.
      • -
      -

      How to Download Clash of Clans APK on Your Android Device

      -

      The official way: using Google Play Store

      -

      The easiest and safest way to download Clash of Clans APK on your Android device is to use the Google Play Store. Here are the steps:

      -
        -
      1. Open the Google Play Store app on your device.
      2. -
      3. Search for "Clash of Clans" or use this link to go directly to the game page.
      4. -
      5. Tap on the "Install" button and wait for the download to finish.
      6. -
      7. Once the installation is complete, you can open the game and start playing.
      8. -
      -

Note: Installing from the Google Play Store does not require any special settings. The "Unknown sources" (or "Install unknown apps") permission is only needed when you install APK files from outside the Play Store, as described in the next section.

      -

      The alternative way: using APKCombo or other third-party sources

      -

      If you cannot access the Google Play Store or want to download an older or modded version of Clash of Clans APK, you can use APKCombo or other third-party sources. APKCombo is a website that provides APK files for various Android apps and games. Here are the steps:

      -
        -
      1. Go to the APKCombo website using your browser.
      2. -
      3. Search for "Clash of Clans" or use this link to go directly to the game page.
      4. -
      5. Select the version of Clash of Clans APK that you want to download and tap on the "Download APK" button.
      6. -
      7. Wait for the download to finish and then open the APK file on your device.
      8. -
      9. Follow the instructions to install the game and grant the necessary permissions.
      10. -
      11. Once the installation is complete, you can open the game and start playing.
      12. -
      -

      Note: Be careful when downloading APK files from third-party sources, as they may contain malware or viruses that can harm your device. Always scan the files with an antivirus app before installing them. Also, make sure that you have enough storage space on your device and a stable internet connection.

      -


      -

      The precautions and requirements for installing Clash of Clans APK

      -

      Before installing Clash of Clans APK on your Android device, you should take some precautions and check some requirements, such as:

      -
        -
      • Make sure that your device meets the minimum system requirements for running Clash of Clans, which are: Android 4.4 or higher, 2 GB of RAM, and 200 MB of free space.
      • -
      • Make sure that your device has a good battery level or is connected to a charger, as installing Clash of Clans APK may consume a lot of power.
      • -
      • Make sure that you have a backup of your data and progress in Clash of Clans, as installing Clash of Clans APK may overwrite or delete them. You can use your Google account, Facebook account, or Supercell ID to save your progress in the game.
      • -
      • Make sure that you read and agree to the terms and conditions and privacy policy of Clash of Clans before installing Clash of Clans APK, as they may differ from those of the Google Play Store version.
      • -
      -

      How to Play Clash of Clans on Your Android Device

      -

      The basics of building your village, raising your clan, and competing in clan wars

      -

      Once you have installed Clash of Clans APK on your Android device, you can start playing the game by following these basic steps:

      -
        -
      1. Create your own village by placing various buildings, such as town hall, barracks, gold mines, elixir collectors, defenses, walls, etc. You can also upgrade your buildings to improve their functions and appearance.
      2. -
      3. Train various troops, such as barbarians, archers, giants, wizards, dragons, etc. You can also upgrade your troops to increase their strength and skills.
      4. -
      5. Join or create a clan with other players. You can chat with your clan members, donate and request troops, and participate in clan wars and clan games.
      6. -
      7. Attack other players' villages or defend your own from enemy raids. You can use different strategies and combinations of troops, heroes, spells, and siege machines to win battles and loot resources.
      8. -
      9. Earn trophies by winning battles and climb up the ranks in the global and local leaderboards. You can also earn stars by completing achievements and quests.
      10. -
      -

      The tips and tricks for optimizing your strategy, resources, and troops

      -

      To succeed in Clash of Clans on Android, you need to optimize your strategy, resources, and troops. Here are some tips and tricks that can help you:

      -
        -
      • Plan your base layout carefully. You should balance between defense and offense, as well as between resource production and protection. You should also consider the range and coverage of your defenses, the placement of traps and obstacles, and the direction of enemy attacks.
      • -
      • Manage your resources wisely. You should spend your gold and elixir on upgrading your buildings and troops according to your priorities and goals. You should also collect your resources regularly from your mines and collectors, as well as from looting other players' villages.
      • -
      • Choose the best troops for each situation. You should know the strengths and weaknesses of each troop type, as well as their preferred targets and behaviors. You should also match your troops with the appropriate heroes, spells, and siege machines to maximize their effectiveness.
      • -
      • Learn from your mistakes and improve your skills. You should review your attack and defense logs and watch replays of your battles to see what went wrong and what went right. You should also watch videos and guides from other players to learn new strategies and techniques.
      • -
      -

      The challenges and rewards of playing Clash of Clans on Android

      -

      Playing Clash of Clans on Android can be both challenging and rewarding. Here are some of the challenges and rewards that you can expect:

| Challenges | Rewards |
| --- | --- |
| Keeping up with the updates and changes in the game, such as new features, events, balance changes, etc. | Enjoying the fresh and exciting content and gameplay that the game offers, such as new troops, buildings, modes, etc. |
| Competing with millions of players around the world, some of whom may have more experience, resources, or skills than you. | Earning respect and recognition from your peers and opponents, as well as rewards and bonuses from the game, such as gems, gold, elixir, etc. |
| Dealing with technical issues or problems that may occur on your device or in the game, such as crashes, bugs, errors, etc. | Getting support and assistance from the developers and the community, as well as solutions and fixes for the issues or problems. |
      -

      Conclusion

      -

      A summary of the main points and a call to action for the readers

      -

      In conclusion, Clash of Clans is a fun and addictive strategy game that you can play on your Android device. You can download Clash of Clans APK from the Google Play Store or from third-party sources, depending on your preference and availability. You can also enjoy the game's features, such as building your village, raising your clan, and competing in clan wars. However, you should also be aware of the challenges and rewards that come with playing Clash of Clans on Android. If you are ready to join the clash, download Clash of Clans APK today and start your adventure!

      -

      FAQs

      -

      Q1: Is Clash of Clans free to play on Android?

      -

      A1: Yes, Clash of Clans is free to play on Android. However, the game also offers in-app purchases that can enhance your gaming experience. You can buy gems, gold, elixir, dark elixir, magic items, etc. using real money. However, these purchases are optional and not required to play or progress in the game.

      -

      Q2: Is Clash of Clans compatible with all Android devices?

      -

      A2: No, Clash of Clans is not compatible with all Android devices. The game requires Android 4.4 or higher, 2 GB of RAM, and 200 MB of free space to run smoothly. If your device does not meet these requirements, you may experience performance issues or errors in the game.
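If the phone is connected to a computer with USB debugging enabled, these figures can also be read over adb instead of digging through the settings menus. A rough sketch in Python; it assumes the adb tool is installed and the device is authorized, and it only reads standard Android properties and files, nothing specific to Clash of Clans:

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

android_version = adb("shell", "getprop", "ro.build.version.release")
mem_total_line = adb("shell", "cat", "/proc/meminfo").splitlines()[0]  # "MemTotal: ... kB"
data_usage_line = adb("shell", "df", "/data").splitlines()[-1]         # free space on the data partition

print("Android version:", android_version)
print(mem_total_line)
print("/data:", data_usage_line)
```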

      -

      Q3: How can I transfer my Clash of Clans progress from another device to my Android device?

      -

      A3: You can transfer your Clash of Clans progress from another device to your Android device by using your Google account, Facebook account, or Supercell ID. You need to link your game account to one of these options on both devices and then log in with the same option on your Android device. This will sync your progress across multiple devices.

      -

      Q4: How can I join or create a clan in Clash of Clans on Android?

      -

      A4: You can join or create a clan in Clash of Clans on Android by tapping on the clan icon at the bottom left corner of the screen. You can then search for an existing clan that suits your preferences or create your own clan by setting its name, badge, description, requirements, etc. You can also invite or accept other players to join your clan.

      -

      Q5: How can I contact the support team or report a problem in Clash of Clans on Android?

      -

A5: You can contact the support team or report a problem in Clash of Clans on Android by tapping on the settings icon at the top right corner of the screen. From there, tap on "Help and Support" to browse topics and FAQs that may answer your question, tap on "Contact Us" to send a message to the support team, or tap on "Report an Issue" to submit a bug report or a feedback form.

      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Top War Mod Apk and Conquer the Island World with Your Army.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Top War Mod Apk and Conquer the Island World with Your Army.md deleted file mode 100644 index c63810927e9127498093ff610ab89bed615ce863..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Top War Mod Apk and Conquer the Island World with Your Army.md +++ /dev/null @@ -1,151 +0,0 @@ -
      -

      How to Download and Install Top War: Battle Game Mod APK

      -

      If you are looking for a fun and addictive strategy game that lets you rule the world with your army, you might want to try Top War: Battle Game Mod. This is a modified version of the original game that gives you unlimited money, gems, and resources to upgrade your troops and buildings. You can also enjoy all the features of the game without any ads or restrictions.

      -

      download apk top war battle game mod


      Download File > https://ssurll.com/2uNQJq



      -

      However, you won't find this modded version on the Google Play Store, as it is not an official app. You will need to download and install it manually using an APK file. In this article, we will show you what an APK file is, how to download and install it safely, and what are the benefits and risks of using it.

      -

      What is Top War: Battle Game Mod?

      -

      Top War: Battle Game is a popular strategy game that combines merge mechanics with real-time combat. You can build your base, recruit soldiers, tanks, planes, and ships, and merge them to create more powerful units. You can also join alliances, fight other players, and participate in various events and missions.

      -

      Features of Top War: Battle Game Mod

      -

      The modded version of Top War: Battle Game offers some extra features that make the game more enjoyable and easier. Some of these features are:

      -
        -
      • Unlimited money and gems: You can use these currencies to buy anything you want in the game, such as items, upgrades, skins, and VIP privileges.
      • -
      • Unlimited resources: You can use these resources to build and expand your base, as well as research new technologies and skills.
      • -
      • No ads: You can play the game without any annoying ads or pop-ups.
      • -
      • No root required: You don't need to root your device to install and play the modded version.
      • -
      -

      Requirements for Top War: Battle Game Mod

      -

      To download and install Top War: Battle Game Mod APK, you will need:

      -
        -
      • An Android device running Android 4.4 or higher.
      • -
      • At least 150 MB of free storage space on your device.
      • -
      • A stable internet connection.
      • -
      • A file manager app or a web browser.
      • -
      -

      What is an APK File?

      -

      An APK file is a package file that contains all the components of an Android app. It is similar to other software packages such as APPX in Windows or DEB in Linux. An APK file can be built from source code written in Java or Kotlin.
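Because an APK is essentially a ZIP archive with a fixed layout (AndroidManifest.xml, classes.dex, resources, and so on), you can inspect one without installing it. A minimal sketch using Python's standard library; the file name is a placeholder:

```python
import zipfile

# Any .apk can be opened as a ZIP archive; "app.apk" is a placeholder path.
with zipfile.ZipFile("app.apk") as apk:
    for name in apk.namelist()[:20]:  # print the first few entries
        print(name)                   # e.g. AndroidManifest.xml, classes.dex, res/...
```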

      -

      Benefits of Using APK Files

      -

      Some of the benefits of using APK files are:

      -

      download apk top war battle game mod unlimited money and gems
      -download apk top war battle game mod latest version
      -download apk top war battle game mod for android
      -download apk top war battle game mod free
      -download apk top war battle game mod offline
      -download apk top war battle game mod no root
      -download apk top war battle game mod 2023
      -download apk top war battle game mod hack
      -download apk top war battle game mod apkjosh[^1^]
      -download apk top war battle game mod mega
      -download apk top war battle game mod obb
      -download apk top war battle game mod rexdl
      -download apk top war battle game mod revdl
      -download apk top war battle game mod unlimited everything
      -download apk top war battle game mod update
      -download apk top war battle game mod online
      -download apk top war battle game mod cheats
      -download apk top war battle game mod android 1
      -download apk top war battle game mod ios
      -download apk top war battle game mod pc
      -download apk top war battle game mod windows 10
      -download apk top war battle game mod mac
      -download apk top war battle game mod bluestacks
      -download apk top war battle game mod gameplay
      -download apk top war battle game mod review
      -download apk top war battle game mod tips and tricks
      -download apk top war battle game mod guide
      -download apk top war battle game mod tutorial
      -download apk top war battle game mod wiki
      -download apk top war battle game mod reddit
      -download apk top war battle game mod forum
      -download apk top war battle game mod discord
      -download apk top war battle game mod facebook
      -download apk top war battle game mod twitter
      -download apk top war battle game mod instagram
      -download apk top war battle game mod youtube
      -download apk top war battle game mod tiktok
      -download apk top war battle game mod pinterest
      -download apk top war battle game mod quora
      -download apk top war battle game mod medium
      -download apk top war battle game mod blogspot
      -download apk top war battle game mod wordpress
      -download apk top war battle game mod wixsite
      -download apk top war battle game mod squarespace
      -download apk top war battle game mod shopify
      -download apk top war battle game mod google play store
      -download apk top war battle game mod amazon appstore
      -download apk top war battle game mod apkpure
      -download apk top war battle game mod uptodown

      -
        -
      • You can access apps that are not available on the Google Play Store: Some apps may be removed from the Play Store due to various reasons, such as policy violations, regional restrictions, or legal issues. You can still download and install these apps using their APK files from other sources.
      • -
      • You can get the latest updates before they are officially released: Some app developers may release beta versions or test versions of their apps before they update them on the Play Store. You can download these versions using their APK files and enjoy the new features before anyone else.
      • -
      • You can install older versions of apps: Sometimes, you may prefer an older version of an app over a newer one, because it has fewer bugs, more features, or better compatibility. You can download and install these versions using their APK files and avoid the unwanted changes.
      • -
• You can customize your apps: Some apps may have modded versions that offer extra features, such as unlimited money, unlocked items, or an ad-free experience. You can download and install these versions using their APK files and enjoy the enhanced gameplay.
      • -
      -

      Risks of Using APK Files

      -

      Some of the risks of using APK files are:

      -
        -
      • You may download malware or viruses: Some APK files may contain malicious code that can harm your device or steal your personal information. You should always download APK files from trusted and reputable sources, and scan them with an antivirus app before installing them.
      • -
      • You may violate the terms and conditions of the app developers: Some app developers may not allow you to use their apps outside the Google Play Store, or modify their apps in any way. You may face legal consequences or lose access to their services if you violate their terms and conditions.
      • -
• You may encounter compatibility or performance issues: Some APK files may not work properly on your device, or cause crashes, glitches, or errors. You should always check the compatibility and requirements of the APK files before installing them, and back up your data before making any changes.
      • -
      -

      How to Download Top War: Battle Game Mod APK

      -

      There are two ways to download Top War: Battle Game Mod APK: from a trusted website or from your computer. Here are the steps for each method:

      -

      From a Trusted Website

      -
        -
      1. Open your web browser and go to a website that offers Top War: Battle Game Mod APK. Some examples are APKPure, APKFab, and APKMody.
      2. -
      3. Search for Top War: Battle Game Mod APK and select the latest version.
      4. -
      5. Tap on the download button and wait for the file to be downloaded.
      6. -
      7. Once the download is complete, tap on the file to open it.
      8. -
      -

      From Your Computer

      -
        -
      1. Open your web browser and go to a website that offers Top War: Battle Game Mod APK. Some examples are APKPure, APKFab, and APKMody.
      2. -
      3. Search for Top War: Battle Game Mod APK and select the latest version.
      4. -
      5. Click on the download button and save the file to your computer.
      6. -
      7. Connect your Android device to your computer using a USB cable.
      8. -
      9. Copy the file from your computer to your device's storage.
      10. -
      11. Disconnect your device from your computer.
      12. -
      13. Open your file manager app and locate the file on your device.
      14. -
      15. Tap on the file to open it.
      16. -
      -

      How to Install Top War: Battle Game Mod APK

      -

      To install Top War: Battle Game Mod APK, you will need to enable unknown sources on your device, use a file manager app or a web browser, and follow the installation steps. Here are the details for each step:

      -

      Enable Unknown Sources

      -

      To allow your device to install apps from sources other than the Google Play Store, you will need to enable unknown sources. This is a security feature that prevents unauthorized or harmful apps from being installed on your device. To enable unknown sources, follow these steps:

      -
        -
      1. Go to your device's settings and tap on security or privacy.
      2. -
      3. Find the option that says unknown sources or install unknown apps and toggle it on.
      4. -
      5. A warning message will appear. Read it carefully and tap on OK or allow.
      6. -
      -

      Use a File Manager App or a Web Browser

      -

      To open the APK file that you have downloaded, you will need to use a file manager app or a web browser. A file manager app is an app that lets you access and manage the files on your device. A web browser is an app that lets you browse the internet. To use a file manager app or a web browser, follow these steps:

      -
        -
      1. If you have downloaded the APK file from a trusted website, open your web browser and go to your downloads folder. If you have copied the APK file from your computer, open your file manager app and go to the folder where you have saved the file.
      2. -
      3. Tap on the APK file to open it. A pop-up window will appear, asking you to confirm the installation.
      4. -
      5. Tap on install and wait for the installation to complete.
      6. -
      7. Once the installation is done, tap on open to launch the app or done to exit the window.
      8. -
      -

      Follow the Installation Steps

      -

      To complete the installation of Top War: Battle Game Mod APK, you will need to follow the steps that appear on your screen. These steps may vary depending on the app and your device, but they usually include:

      -
        -
      1. Accepting the permissions that the app requires to function properly.
      2. -
      3. Choosing the language and region that you prefer.
      4. -
      5. Signing in with your account or creating a new one.
      6. -
      7. Customizing your settings and preferences.
      8. -
      9. Enjoying the game!
      10. -
      -

      Conclusion

      -

      Top War: Battle Game Mod APK is a great way to enjoy a fun and addictive strategy game with unlimited money, gems, and resources. You can download and install it easily using an APK file from a trusted website or your computer. However, you should also be aware of the risks of using APK files, such as malware, legal issues, or compatibility problems. You should always scan the APK files with an antivirus app before installing them, and backup your data before making any changes. We hope this article has helped you learn how to download and install Top War: Battle Game Mod APK safely and successfully. Have fun ruling the world with your army!

      -

      FAQs

      -

      Here are some frequently asked questions about Top War: Battle Game Mod APK:

      -

      Q: Is Top War: Battle Game Mod APK free?

      -

      A: Yes, Top War: Battle Game Mod APK is free to download and install. You don't need to pay anything to use it.

      -

      Q: Is Top War: Battle Game Mod APK safe?

      -

      A: Top War: Battle Game Mod APK is safe if you download it from a trusted and reputable source, and scan it with an antivirus app before installing it. However, you should also be careful of the risks of using APK files, such as malware, legal issues, or compatibility problems.
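Besides an antivirus scan, a quick sanity check is to compare the file's checksum against the one published by the download site, when it provides one. A minimal sketch in Python; the file name and the expected hash are placeholders, not values from any real release:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "<checksum published by the download page>"  # placeholder
actual = sha256_of("downloaded.apk")                     # placeholder file name
print("Checksum OK" if actual == expected else f"Mismatch: {actual}")
```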

      -

      Q: Is Top War: Battle Game Mod APK legal?

      -

      A: Top War: Battle Game Mod APK is not an official app, and it may violate the terms and conditions of the app developers. You may face legal consequences or lose access to their services if you use it. You should use it at your own risk and responsibility.

      -

      Q: Does Top War: Battle Game Mod APK work on all devices?

      -

      A: Top War: Battle Game Mod APK works on most Android devices running Android 4.4 or higher. However, some devices may not support it or have compatibility or performance issues. You should check the compatibility and requirements of the APK file before installing it, and backup your data before making any changes.

      -

      Q: How can I update Top War: Battle Game Mod APK?

      -

      A: To update Top War: Battle Game Mod APK, you will need to download and install the latest version of the APK file from a trusted website or your computer. You should also uninstall the previous version of the app before installing the new one.

      -
      -
      \ No newline at end of file diff --git a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/prepare_person.py b/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/prepare_person.py deleted file mode 100644 index 3df771ff14502a680b4f20abd4856ff74a54e058..0000000000000000000000000000000000000000 --- a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/prepare_person.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -from shutil import copyfile - -# You only need to change this line to your dataset download path -download_path = './Market-1501-v15.09.15' - -if not os.path.isdir(download_path): - print('please change the download_path') - -save_path = download_path + '/pytorch' -if not os.path.isdir(save_path): - os.mkdir(save_path) -#----------------------------------------- -#query -query_path = download_path + '/query' -query_save_path = download_path + '/pytorch/query' -if not os.path.isdir(query_save_path): - os.mkdir(query_save_path) - -for root, dirs, files in os.walk(query_path, topdown=True): - for name in files: - if not name[-3:]=='jpg': - continue - ID = name.split('_') - src_path = query_path + '/' + name - dst_path = query_save_path + '/' + ID[0] - if not os.path.isdir(dst_path): - os.mkdir(dst_path) - copyfile(src_path, dst_path + '/' + name) - -#----------------------------------------- -#multi-query -query_path = download_path + '/gt_bbox' -# for dukemtmc-reid, we do not need multi-query -if os.path.isdir(query_path): - query_save_path = download_path + '/pytorch/multi-query' - if not os.path.isdir(query_save_path): - os.mkdir(query_save_path) - - for root, dirs, files in os.walk(query_path, topdown=True): - for name in files: - if not name[-3:]=='jpg': - continue - ID = name.split('_') - src_path = query_path + '/' + name - dst_path = query_save_path + '/' + ID[0] - if not os.path.isdir(dst_path): - os.mkdir(dst_path) - copyfile(src_path, dst_path + '/' + name) - -#----------------------------------------- -#gallery -gallery_path = download_path + '/bounding_box_test' -gallery_save_path = download_path + '/pytorch/gallery' -if not os.path.isdir(gallery_save_path): - os.mkdir(gallery_save_path) - -for root, dirs, files in os.walk(gallery_path, topdown=True): - for name in files: - if not name[-3:]=='jpg': - continue - ID = name.split('_') - src_path = gallery_path + '/' + name - dst_path = gallery_save_path + '/' + ID[0] - if not os.path.isdir(dst_path): - os.mkdir(dst_path) - copyfile(src_path, dst_path + '/' + name) - -#--------------------------------------- -#train_all -train_path = download_path + '/bounding_box_train' -train_save_path = download_path + '/pytorch/train_all' -if not os.path.isdir(train_save_path): - os.mkdir(train_save_path) - -for root, dirs, files in os.walk(train_path, topdown=True): - for name in files: - if not name[-3:]=='jpg': - continue - ID = name.split('_') - src_path = train_path + '/' + name - dst_path = train_save_path + '/' + ID[0] - if not os.path.isdir(dst_path): - os.mkdir(dst_path) - copyfile(src_path, dst_path + '/' + name) - - -#--------------------------------------- -#train_val -train_path = download_path + '/bounding_box_train' -train_save_path = download_path + '/pytorch/train' -val_save_path = download_path + '/pytorch/test' -if not os.path.isdir(train_save_path): - os.mkdir(train_save_path) - os.mkdir(val_save_path) - -for root, dirs, files in os.walk(train_path, topdown=True): - for name in files: - if not name[-3:]=='jpg': - continue - ID = name.split('_') - src_path = train_path + '/' + name - 
dst_path = train_save_path + '/' + ID[0] - if not os.path.isdir(dst_path): - os.mkdir(dst_path) - dst_path = val_save_path + '/' + ID[0] #first image is used as val image - os.mkdir(dst_path) - copyfile(src_path, dst_path + '/' + name) \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/translate/finetune_deltalm.sh b/spaces/skf15963/summary/fengshen/examples/translate/finetune_deltalm.sh deleted file mode 100644 index 6d6bd9ef5fde6c9afd2957b79118e13b4e94d8da..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/translate/finetune_deltalm.sh +++ /dev/null @@ -1,115 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=mbart_en_zh -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=8 -#SBATCH --gres=gpu:8 # number of gpus -#SBATCH --cpus-per-task=32 -#SBATCH -o %x-%j.log - -set -x -e - -echo "START TIME: $(date)" - -MODEL_NAME=deltalm_en_zh -MICRO_BATCH_SIZE=16 -ROOT_DIR=../../workspace -MODEL_ROOT_DIR=$ROOT_DIR/${MODEL_NAME} - - -if [ ! -d ${MODEL_ROOT_DIR} ];then - mkdir ${MODEL_ROOT_DIR} - echo ${MODEL_ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${MODEL_ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -output_save_path=${MODEL_ROOT_DIR}.json -if [ -f ${output_save_path} ];then - echo ${output_save_path} exist, rm it!!!!!!!!!!!!!!!!! - rm ${output_save_path} -fi - -ZERO_STAGE=1 - -config_json="${MODEL_ROOT_DIR}/ds_config.${MODEL_NAME}.json" - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 1000, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - - -TRAINER_ARGS=" - --max_epochs 20 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy deepspeed_stage_${ZERO_STAGE} \ - --default_root_dir ${MODEL_ROOT_DIR} \ - --save_ckpt_path ${MODEL_ROOT_DIR}/ckpt \ - --save_top_k 3 \ - --monitor valid_sacrebleu \ - --mode max \ - --save_last \ - --every_n_train_steps 0 \ - --val_check_interval 0.2 \ - --label_smoothing 0.1 \ - --warmup_steps 4000 \ - --learning_rate 1e-7 \ - --adam_beta2 0.98 \ - --scheduler_type inverse_sqrt \ - --reverse_src_tgt \ - --tgt_zh \ -" - -DATA_ARGS=" - --datasets_name case_test \ - --num_workers 8 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --val_datasets_field val \ - --max_enc_length 256 \ - --max_dec_length 256 \ -" - -mode_path="IDEA-CCNL/Randeng-Deltalm-362M-En-Zn" - - -MODEL_ARGS=" - --model_path $mode_path \ - --output_save_path $output_save_path \ -" - -SCRIPTS_PATH=finetune_deltalm.py - -cat $SCRIPTS_PATH - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD - -source activate -conda activate fengshen -# srun python3 $CMD -python3 $CMD diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md b/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md deleted file mode 100644 index 04f3f15d3ed391e26ca87f726ae88f30d1d414ab..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -name: ❓ Questions/Help -about: If you have questions, 
please first search existing issues and docs -labels: 'question, needs triage' ---- - -## ❓ Questions and Help - -### Before asking: -1. search the issues. -2. search the docs. - - - -#### What is your question? - -#### Code - - - -#### What have you tried? - -#### What's your environment? - - - fairseq Version (e.g., 1.0 or main): - - PyTorch Version (e.g., 1.0) - - OS (e.g., Linux): - - How you installed fairseq (`pip`, source): - - Build command you used (if compiling from source): - - Python version: - - CUDA/cuDNN version: - - GPU models and configuration: - - Any other relevant information: diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py deleted file mode 100644 index d6cf06e5872cb86e5c2e726153c7a80c78db9d1e..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..ops import emulate_int - - -class IntEmbedding(nn.Module): - """ - Quantized counterpart of the nn.Embedding module that applies QuantNoise during training. - - Args: - - num_embeddings: number of tokens - - embedding_dim: embedding dimension - - p: amount of noise to inject (0 = no quantization, 1 = quantize all the weights) - - bits: number of bits - - method: choose among {"tensor", "histogram", "channel"} - - update_step: recompute scale and zero_point every update_steps iterations - - Remarks: - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - At test time, the weights are fully quantized - """ - - def __init__( - self, - num_embeddings, - embedding_dim, - padding_idx=None, - max_norm=None, - norm_type=2.0, - scale_grad_by_freq=False, - sparse=False, - _weight=None, - p=0, - update_step=1000, - bits=8, - method="histogram", - ): - super(IntEmbedding, self).__init__() - self.num_embeddings = num_embeddings - self.embedding_dim = embedding_dim - if padding_idx is not None: - if padding_idx > 0: - assert ( - padding_idx < self.num_embeddings - ), "Padding_idx must be within num_embeddings" - elif padding_idx < 0: - assert ( - padding_idx >= -self.num_embeddings - ), "Padding_idx must be within num_embeddings" - padding_idx = self.num_embeddings + padding_idx - self.padding_idx = padding_idx - self.max_norm = max_norm - self.norm_type = norm_type - self.scale_grad_by_freq = scale_grad_by_freq - if _weight is None: - self.weight = nn.Parameter(torch.Tensor(num_embeddings, embedding_dim)) - self.reset_parameters() - else: - assert list(_weight.shape) == [ - num_embeddings, - embedding_dim, - ], "Shape of weight does not match num_embeddings and embedding_dim" - self.weight = nn.Parameter(_weight) - self.sparse = sparse - - # quantization parameters - self.p = p - self.bits = bits - self.method = method - self.update_step = update_step - self.counter = 0 - - def reset_parameters(self): - nn.init.normal_(self.weight) - if self.padding_idx is not None: 
- with torch.no_grad(): - self.weight[self.padding_idx].fill_(0) - - def forward(self, input): - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.training else 1 - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # quantize weight - weight_quantized, self.scale, self.zero_point = emulate_int( - self.weight.detach(), - bits=self.bits, - method=self.method, - scale=self.scale, - zero_point=self.zero_point, - ) - - # mask to apply noise - mask = torch.zeros_like(self.weight) - mask.bernoulli_(1 - p) - noise = (weight_quantized - self.weight).masked_fill(mask.bool(), 0) - - # using straight-through estimator (STE) - clamp_low = -self.scale * self.zero_point - clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point) - weight = ( - torch.clamp(self.weight, clamp_low.item(), clamp_high.item()) - + noise.detach() - ) - - # return output - output = F.embedding( - input, - weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) - return output - - def extra_repr(self): - s = "{num_embeddings}, {embedding_dim}" - if self.padding_idx is not None: - s += ", padding_idx={padding_idx}" - if self.max_norm is not None: - s += ", max_norm={max_norm}" - if self.norm_type != 2: - s += ", norm_type={norm_type}" - if self.scale_grad_by_freq is not False: - s += ", scale_grad_by_freq={scale_grad_by_freq}" - if self.sparse is not False: - s += ", sparse=True" - s += "quant_noise={p}, bits={bits}, method={method}" - return s.format(**self.__dict__) diff --git a/spaces/stomexserde/gpt4-ui/Examples/ABCD - Any Body Can Dance 2 Download Kickass 720p Hd.md b/spaces/stomexserde/gpt4-ui/Examples/ABCD - Any Body Can Dance 2 Download Kickass 720p Hd.md deleted file mode 100644 index c59665539f7486aa1699cef7a0d987feecb916ec..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/ABCD - Any Body Can Dance 2 Download Kickass 720p Hd.md +++ /dev/null @@ -1,19 +0,0 @@ -
      -

      How to Download ABCD - Any Body Can Dance 2 in HD Quality

      -

      If you are a fan of dance movies, you might be interested in downloading ABCD - Any Body Can Dance 2, a 2015 Indian Hindi-language film directed by Remo D'Souza and starring Prabhu Deva, Varun Dhawan and Shraddha Kapoor. The film is a sequel to the 2013 film ABCD: Any Body Can Dance and follows the story of a dance troupe that seeks to redeem itself by winning the World Hip-Hop Dance Championship in Las Vegas after being accused of cheating in a reality TV show.

      -

      ABCD - Any Body Can Dance 2 download kickass 720p hd


      DOWNLOAD ————— https://urlgoal.com/2uIbM0



      -

      Downloading ABCD - Any Body Can Dance 2 in HD quality is not difficult if you follow these simple steps:

      -
        -
      1. Find a reliable torrent site that offers the movie in kickass 720p resolution. Some examples are The Pirate Bay, Kickass Torrents and RARBG. Make sure you have a VPN or proxy service to access these sites safely and anonymously.
      2. -
      3. Search for the movie title on the torrent site and choose the file that has the most seeders and leechers. This will ensure faster download speed and better quality. You can also check the comments and ratings of other users to verify the authenticity and quality of the file.
      4. -
      5. Download a torrent client software such as BitTorrent, uTorrent or Vuze. Install it on your device and open the torrent file you downloaded from the torrent site. The torrent client will start downloading the movie to your device.
      6. -
      7. Once the download is complete, you can enjoy watching ABCD - Any Body Can Dance 2 in HD quality on your device. You can also transfer it to other devices or burn it to a DVD if you wish.
      8. -
      -

      Note: Downloading movies from torrent sites may be illegal in some countries and may expose you to malware and viruses. We do not condone or encourage piracy and recommend that you watch movies legally from official sources such as Netflix, Amazon Prime Video or Disney+ Hotstar.

      - -

      ABCD - Any Body Can Dance 2 is not just a movie about dance, but also about friendship, passion, patriotism and redemption. The film showcases the journey of a group of dancers who overcome their personal and professional challenges to achieve their dreams. The film also pays tribute to the real-life dance crew The Kings, who inspired the story and won the World Hip-Hop Dance Championship in 2019.

      -

      The film has received mixed to positive reviews from critics and audiences alike. Some praised the film for its stunning choreography, production design, music and performances, especially by Prabhu Deva, Varun Dhawan and Shraddha Kapoor. Others criticized the film for its weak script, predictable plot, clichéd dialogues and excessive length. The film was also compared unfavorably to its predecessor ABCD: Any Body Can Dance, which was considered more original and innovative.

      -

      Despite the mixed reviews, ABCD - Any Body Can Dance 2 was a commercial success at the box office. It became the highest-grossing dance film in India and one of the highest-grossing Bollywood films of 2015. It also received several nominations and awards for its music and choreography at various ceremonies. The film has a cult following among dance enthusiasts and fans of the genre.

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/FlixGrab Premium 4.4.3.419 Netflix Downloader Crack 41.2 MB.md b/spaces/stomexserde/gpt4-ui/Examples/FlixGrab Premium 4.4.3.419 Netflix Downloader Crack 41.2 MB.md deleted file mode 100644 index 5a684e60ca07ce80388fae97d91ed96fceacab7e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/FlixGrab Premium 4.4.3.419 Netflix Downloader Crack 41.2 MB.md +++ /dev/null @@ -1,44 +0,0 @@ - -

      FlixGrab Premium 4.4.3.419: The Best Netflix Downloader Crack for Windows

      -

      If you are looking for a way to download Netflix videos to your PC and watch them offline, you need FlixGrab Premium 4.4.3.419. This is the latest version of the popular Netflix downloader crack that allows you to save any Netflix movie, show, or documentary in high quality and with subtitles.

      -

      FlixGrab Premium 4.4.3.419 Netflix Downloader Crack | 41.2 MB


      Download ››› https://urlgoal.com/2uI7Xf



      -

      In this article, we will show you how to use FlixGrab Premium 4.4.3.419 to download Netflix videos easily and quickly. We will also provide you with a link to download the crack file for free and activate the full version of the software.

      -

      What is FlixGrab Premium 4.4.3.419?

      -

      FlixGrab Premium 4.4.3.419 is a powerful and user-friendly application that lets you download Netflix videos to your PC in MP4 or MKV format. You can choose the video quality, resolution, audio track, and subtitles according to your preferences.

      -

      FlixGrab Premium 4.4.3.419 supports downloading multiple videos at the same time with the batch mode feature. You can also pause and resume the downloads at any time.

      -

      FlixGrab Premium 4.4.3.419 is compatible with Windows 7, 8, 10, and Vista (32-bit and 64-bit). It requires a stable Internet connection and a valid Netflix account to access the content.

      -

      -

      How to Use FlixGrab Premium 4.4.3.419 to Download Netflix Videos?

      -

      Using FlixGrab Premium 4.4.3.419 to download Netflix videos is very easy and fast. Here are the steps you need to follow:

      -
        -
      1. Download and install FlixGrab Premium 4.4.3.419 from the link below.
      2. -
      3. Run the program and enter your Netflix login credentials.
      4. -
      5. Copy the URL of the Netflix video you want to download and paste it into the program.
      6. -
      7. Select the video quality, resolution, audio track, and subtitles you want.
      8. -
      9. Click on the "Download" button and wait for the process to complete.
      10. -
      11. Enjoy watching your downloaded Netflix videos offline on your PC or any other device.
      12. -
      -

      How to Download FlixGrab Premium 4.4.3.419 Crack for Free?

      -

      If you want to enjoy the full features of FlixGrab Premium 4.4.3.419 without paying anything, you can download the crack file from the link below.

      -

      The crack file is a small program that bypasses the activation process of FlixGrab Premium 4.4.3.419 and unlocks all its functions.

      -

      To use the crack file, you need to follow these steps:

      -
        -
      1. Download the crack file from the link below.
      2. -
      3. Extract the zip file and run the crack.exe file as administrator.
      4. -
      5. Click on the "Crack" button and wait for a few seconds.
      6. -
      7. Launch FlixGrab Premium 4.4.3.419 and enjoy downloading unlimited Netflix videos for free.
      8. -
      -

      Download Links

      -

      Here are the download links for FlixGrab Premium 4.4.3.419 and its crack file:

(Screenshot: FlixGrab Premium 4.4.3)

      Conclusion

      - -

      FlixGrab Premium 4.4.

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ishq Click Dual Audio Hindi Download REPACK.md b/spaces/stomexserde/gpt4-ui/Examples/Ishq Click Dual Audio Hindi Download REPACK.md deleted file mode 100644 index ac875e2e4bd022ad1cc7145d2a3ca0db2f1aba54..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ishq Click Dual Audio Hindi Download REPACK.md +++ /dev/null @@ -1,31 +0,0 @@ - -

      Ishq Click Dual Audio Hindi Download: How to Watch the Romantic Drama Online

      - -

      If you are looking for a romantic drama that will make you feel all the emotions, then you might want to check out Ishq Click. This 2016 film stars Adhyayan Suman and Sara Loren as two aspiring artists who fall in love in Mumbai. However, their relationship is tested by fame, jealousy and betrayal. Ishq Click is a remake of the 2004 Thai film Shutter, which was also adapted into a Hollywood horror movie in 2008.

      - -

      But how can you watch Ishq Click online? Well, if you are looking for a dual audio version of the film, with both Hindi and English subtitles, then you have come to the right place. In this article, we will show you how to download Ishq Click dual audio Hindi from various sources. We will also give you some tips on how to optimize your viewing experience and avoid any legal issues.

      -

      Ishq Click Dual Audio Hindi Download


Download: https://urlgoal.com/2uI7PY



      - -

      Where to Download Ishq Click Dual Audio Hindi

      - -

      There are many websites that offer Ishq Click dual audio Hindi download, but not all of them are safe or legal. Some of them may contain malware, viruses or spyware that can harm your device or steal your personal information. Others may violate the copyright laws and put you at risk of legal action or fines. Therefore, it is important to be careful and choose a reliable source for downloading Ishq Click dual audio Hindi.

      - -

      One of the best options is to use a torrent site that has a good reputation and a large number of seeders. Torrent sites allow you to download files from other users who have already downloaded them. This way, you can get Ishq Click dual audio Hindi download faster and more efficiently. However, you will need a torrent client software to access these sites and download the files. Some of the most popular torrent clients are uTorrent, BitTorrent and Vuze.

      - -

      Once you have installed a torrent client, you can search for Ishq Click dual audio Hindi download on any torrent site that you prefer. Some of the most popular torrent sites are The Pirate Bay, 1337x and RARBG. However, these sites may be blocked or restricted in some countries due to legal issues. In that case, you can use a VPN service to bypass the geo-restrictions and access these sites anonymously.

      - -

      A VPN service is a software that encrypts your internet traffic and routes it through a server in another country. This way, you can hide your IP address and location from your ISP and the authorities. You can also access any website that is blocked or censored in your region. Some of the best VPN services are ExpressVPN, NordVPN and Surfshark.

      - -

      How to Watch Ishq Click Dual Audio Hindi Online

      - -

      After downloading Ishq Click dual audio Hindi from a torrent site, you can watch it online on any device that supports video playback. However, you may need to convert the file format or adjust the settings to optimize your viewing experience. Here are some tips on how to watch Ishq Click dual audio Hindi online:

      - -
        -
• Convert the file format: If the file format of Ishq Click dual audio Hindi download is not compatible with your device or media player, you may need to convert it to a more suitable format. You can use an online converter tool like Zamzar or CloudConvert to do this. Just upload the file, choose the output format and download the converted file. A local command-line alternative is sketched after this list.
      • -
      • Adjust the settings: If the file size or quality of Ishq Click dual audio Hindi download is too high or low for your device or internet speed, you may need to adjust the settings to improve your viewing experience. You can use a video editing software like VLC Media Player or HandBrake to do this. Just open the file, go to the settings menu and change the parameters like resolution, bitrate, frame rate and audio channels.
      • -
      • Select the language: If you want to watch Ishq Click dual audio Hindi online with subtitles or dubbing, you may need to select the language option that suits your preference. You can use a subtitle downloader tool like Subscene or OpenSubtitles to find and download subtitles for Ishq Click dual audio Hindi. Just search for the movie title and language and download the subtitle file. Then, you can use a media player like VLC Media Player or MX Player to load the subtitle file and sync it with the video.
      • -
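For the conversion tip above, a local tool such as ffmpeg can replace an online converter if you prefer to keep the file on your own machine. A minimal sketch driven from Python; it assumes ffmpeg is installed and on the PATH, and the file names are placeholders:

```python
import subprocess

# Convert an MKV into an MP4 container with widely supported codecs.
subprocess.run(
    [
        "ffmpeg", "-i", "input.mkv",      # placeholder input file
        "-c:v", "libx264", "-crf", "23",  # H.264 video at a reasonable quality level
        "-c:a", "aac",                    # AAC audio for broad device support
        "output.mp4",                     # placeholder output file
    ],
    check=True,
)
```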
      -
      -
      \ No newline at end of file diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/postprocessing.py b/spaces/supertori/files/stable-diffusion-webui/modules/postprocessing.py deleted file mode 100644 index 21e32af9866abfc02288f9f04a5195f1700de1b0..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/postprocessing.py +++ /dev/null @@ -1,103 +0,0 @@ -import os - -from PIL import Image - -from modules import shared, images, devices, scripts, scripts_postprocessing, ui_common, generation_parameters_copypaste -from modules.shared import opts - - -def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, show_extras_results, *args, save_output: bool = True): - devices.torch_gc() - - shared.state.begin() - shared.state.job = 'extras' - - image_data = [] - image_names = [] - outputs = [] - - if extras_mode == 1: - for img in image_folder: - image = Image.open(img) - image_data.append(image) - image_names.append(os.path.splitext(img.orig_name)[0]) - elif extras_mode == 2: - assert not shared.cmd_opts.hide_ui_dir_config, '--hide-ui-dir-config option must be disabled' - assert input_dir, 'input directory not selected' - - image_list = shared.listfiles(input_dir) - for filename in image_list: - try: - image = Image.open(filename) - except Exception: - continue - image_data.append(image) - image_names.append(filename) - else: - assert image, 'image not selected' - - image_data.append(image) - image_names.append(None) - - if extras_mode == 2 and output_dir != '': - outpath = output_dir - else: - outpath = opts.outdir_samples or opts.outdir_extras_samples - - infotext = '' - - for image, name in zip(image_data, image_names): - shared.state.textinfo = name - - existing_pnginfo = image.info or {} - - pp = scripts_postprocessing.PostprocessedImage(image.convert("RGB")) - - scripts.scripts_postproc.run(pp, args) - - if opts.use_original_name_batch and name is not None: - basename = os.path.splitext(os.path.basename(name))[0] - else: - basename = '' - - infotext = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in pp.info.items() if v is not None]) - - if opts.enable_pnginfo: - pp.image.info = existing_pnginfo - pp.image.info["postprocessing"] = infotext - - if save_output: - images.save_image(pp.image, path=outpath, basename=basename, seed=None, prompt=None, extension=opts.samples_format, info=infotext, short_filename=True, no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo, forced_filename=None) - - if extras_mode != 2 or show_extras_results: - outputs.append(pp.image) - - devices.torch_gc() - - return outputs, ui_common.plaintext_to_html(infotext), '' - - -def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True): - """old handler for API""" - - args = scripts.scripts_postproc.create_args_for_run({ - "Upscale": { - "upscale_mode": resize_mode, - "upscale_by": upscaling_resize, - "upscale_to_width": upscaling_resize_w, - "upscale_to_height": upscaling_resize_h, - "upscale_crop": upscaling_crop, - "upscaler_1_name": extras_upscaler_1, - "upscaler_2_name": extras_upscaler_2, - "upscaler_2_visibility": extras_upscaler_2_visibility, - }, - "GFPGAN": { - 
"gfpgan_visibility": gfpgan_visibility, - }, - "CodeFormer": { - "codeformer_visibility": codeformer_visibility, - "codeformer_weight": codeformer_weight, - }, - }) - - return run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, show_extras_results, *args, save_output=save_output) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ESET.NOD32.Antivirus.4.0.314.-.mara-fix.1.1.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ESET.NOD32.Antivirus.4.0.314.-.mara-fix.1.1.md deleted file mode 100644 index 91b717408590c971f2a7d8bae4a7639c53c777ed..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/ESET.NOD32.Antivirus.4.0.314.-.mara-fix.1.1.md +++ /dev/null @@ -1,6 +0,0 @@ -

      ESET.NOD32.Antivirus.4.0.314.-.mara-fix.1.1


      Download 🆓 https://cinurl.com/2uEYdB



      -
-Title ESET NOD32 Antivirus & Smart Security 4.0.314 x32 & x64 + Fixes; Uploaded 11 years ago; Last Checked 2 years ago; Size 33 MB; Uploader vetrk; Tags ...
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mplab C18 Full Crack _BEST_ Kid.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mplab C18 Full Crack _BEST_ Kid.md deleted file mode 100644 index 60cba8a93a82e7e557089130cd6eb4dabf1fa9df..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mplab C18 Full Crack _BEST_ Kid.md +++ /dev/null @@ -1,6 +0,0 @@ -

      mplab c18 full crack kid


      Download Zip ———>>> https://cinurl.com/2uEZ0i



      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Security Monitor Pro 5.42 50.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Security Monitor Pro 5.42 50.md deleted file mode 100644 index 6b900ce2ee8dbdf48a8a995544addb90147c5a3c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Security Monitor Pro 5.42 50.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Security Monitor Pro 5.42 50


Download File: https://cinurl.com/2uEXxy



      -
      -
      -

      diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Asecondchancefullmovietagalogdownload !!EXCLUSIVE!!.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Asecondchancefullmovietagalogdownload !!EXCLUSIVE!!.md deleted file mode 100644 index bfd03149981c7af6c648281bd3b464a70ddd7811..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Asecondchancefullmovietagalogdownload !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      asecondchancefullmovietagalogdownload


      Download »»» https://urluss.com/2uCHep



- -asecondchancefullmovietagalogdownload · ewqlso gold edition authorization keygen · EaseUS Data Recovery Wizard 11.9.0 Keygen - [CrackzSoft] download.
      -
      -
      -

      diff --git a/spaces/t13718236382/bingoGPT4/src/lib/bots/bing/sr.ts b/spaces/t13718236382/bingoGPT4/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/taiwhis/Nhandien_nhom36/line_segment.py b/spaces/taiwhis/Nhandien_nhom36/line_segment.py deleted file mode 100644 index 2419e9895fbbaa6a01178caf9ebabaa80055f325..0000000000000000000000000000000000000000 --- a/spaces/taiwhis/Nhandien_nhom36/line_segment.py +++ /dev/null @@ -1,140 +0,0 @@ -from skimage.io import imread -from skimage.color import rgba2rgb, rgb2gray -from skimage.filters import threshold_otsu -import numpy as np -import cv2 - -#mã nguồn của thuật toán line seg ment A * -from heapq import * - -# Hàm tìm các đỉnh peaks trong mảng hpp theo ngưỡng threshold -def find_peak_regions(hpp, threshold): - - peaks = [] - for i, hppv in enumerate(hpp): - if hppv < threshold: - peaks.append([i, hppv]) - return peaks - -# Hàm 
heuristic sử dụng để tính ước lượng khoảng cách giữa 2 điểm trong thuật toán A* -def heuristic(a, b): - return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2 - -# Hàm thực hiện thuật toán A* để tìm đường đi giữa 2 điểm trong mảng 2D -def astar(array, start, goal): - - neighbors = [(0,1),(0,-1),(1,0),(-1,0),(1,1),(1,-1),(-1,1),(-1,-1)] - close_set = set() - came_from = {} - gscore = {start:0} - fscore = {start:heuristic(start, goal)} - oheap = [] - - heappush(oheap, (fscore[start], start)) - - while oheap: - - current = heappop(oheap)[1] - - if current == goal: - data = [] - while current in came_from: - data.append(current) - current = came_from[current] - return data - - close_set.add(current) - for i, j in neighbors: - neighbor = current[0] + i, current[1] + j - tentative_g_score = gscore[current] + heuristic(current, neighbor) - if 0 <= neighbor[0] < array.shape[0]: - if 0 <= neighbor[1] < array.shape[1]: - if array[neighbor[0]][neighbor[1]] == 1: - continue - else: - # array bound y walls - continue - else: - # array bound x walls - continue - - if neighbor in close_set and tentative_g_score >= gscore.get(neighbor, 0): - continue - - if tentative_g_score < gscore.get(neighbor, 0) or neighbor not in [i[1]for i in oheap]: - came_from[neighbor] = current - gscore[neighbor] = tentative_g_score - fscore[neighbor] = tentative_g_score + heuristic(neighbor, goal) - heappush(oheap, (fscore[neighbor], neighbor)) - - return [] - -# Hàm nhận vào ảnh xám và trả về ảnh nhị phân -def get_binary(img): - mean = np.mean(img) - if mean == 0.0 or mean == 1.0: - return img - - thresh = threshold_otsu(img) - binary = img <= thresh - binary = binary * 1 - return binary - -# Hàm tính chiều dài các dòng ngnag của ảnh nhị phân -def horizontal_projections(sobel_image): - return np.sum(sobel_image, axis=1) - -# Hàm nhị phân hoá ảnh và loại bỏ nền trắng -def binarize_image(image, img_origin): - threshold = threshold_otsu(img_origin) - return image < threshold - -# Hàm thực hiện tách dòng văn bản từ ảnh gốc và trả về danh sách các dòng văn bản tách biệt -def line_seg(img, image_origin): - # img = rgba2rgb(img) - if img.ndim > 2: # is this is a rgb/rgba image - img = rgb2gray(img) - - binarized_image = binarize_image(img, image_origin) - hpp = horizontal_projections(binarized_image) - - threshold = (np.max(hpp)-np.min(hpp))/2 - - # Tiến hành phát hiện các đỉnh của dòng văn bản (peaks) thông qua các chiều dọc của ảnh nhị phân - peaks = find_peak_regions(hpp, threshold) - peaks_indexes = np.array(peaks)[:, 0].astype(int) - - segmented_img = np.copy(img) - r, c = segmented_img.shape - for ri in range(r): - if ri in peaks_indexes: - segmented_img[ri, :] = 0 - - # group the peaks through which we will be doing path planning. 
- diff_between_consec_numbers = np.diff(peaks_indexes) # difference between consecutive numbers - indexes_with_larger_diff = np.where(diff_between_consec_numbers > 1)[0].flatten() - peak_groups = np.split(peaks_indexes, indexes_with_larger_diff) - # remove very small regions, these are basically errors in algorithm because of our threshold value - peak_groups = [item for item in peak_groups if len(item) > 20] - print("peak groups found", len(peak_groups)) - - binary_image = get_binary(img) - # Tiến hành tách từng dòng văn bản dựa trên các đỉnh đã phát hiện được - segment_separating_lines = [] - for i, sub_image_index in enumerate(peak_groups): - nmap = binary_image[sub_image_index[0]:sub_image_index[-1]] - path = np.array(astar(nmap, (int(nmap.shape[0]/2), 0), (int(nmap.shape[0]/2),nmap.shape[1]-1))) - offset_from_top = sub_image_index[0] - if path.ndim == 2: - path[:, 0] += offset_from_top - segment_separating_lines.append(path) - - # Tách các đoạn văn bản thành các ảnh riêng biệt - seperated_images = [] - for index, line_segments in enumerate(segment_separating_lines): - if index < len(segment_separating_lines)-1: - lower_line = np.min(segment_separating_lines[index][:,0]) - upper_line = np.max(segment_separating_lines[index+1][:,0]) - seperated_images.append(img[lower_line:upper_line]) - - return seperated_images \ No newline at end of file diff --git a/spaces/tappyness1/error_analysis_obj_det/src/data_ingestion/data_ingestion.py b/spaces/tappyness1/error_analysis_obj_det/src/data_ingestion/data_ingestion.py deleted file mode 100644 index cdb717d8fb495a494085a478919ca70426428075..0000000000000000000000000000000000000000 --- a/spaces/tappyness1/error_analysis_obj_det/src/data_ingestion/data_ingestion.py +++ /dev/null @@ -1,78 +0,0 @@ -import cv2 -import json -import os -import numpy as np - -class AnnotsGTGetter: - - def __init__(self, cfg_obj): - - self.cfg_obj = cfg_obj - - self.img_folder_path = cfg_obj['dataset']['img_folder_path'] - self.json_folder_path = cfg_obj['dataset']['annotations_folder_path'] - self.annot_json_fname = cfg_obj['dataset']['annotations_fname'] - self.labels_dict = cfg_obj['error_analysis']['labels_dict'] - - json_file = open(self.json_folder_path + self.annot_json_fname) - self.annot_data = json.load(json_file) - - self.img_ids_in_json = [annot['image_id'] for annot in self.annot_data['annotations']] - self.all_imgs = os.listdir(self.img_folder_path) - - return - - def get_imgs(self): - """method to get the mutually -inclusive- images between the img_ids in json and those in the folder path - - not needed because all images in folder were accounted for in the json... - """ - all_img_ids_in_folder = [int(i[:-4]) for i in self.all_imgs] - - all_imgs_found = [i for i in all_img_ids_in_folder if i in self.img_ids_in_json] - - print (len(all_imgs_found)) - - - def get_annots(self, img_fname = '000000576052.jpg'): - """retrieve annotation given a filename - - Args: - img_fname (_type_): image file name - - Returns: - np array: all annotations of an image - """ - - # change img_fname for extraction purpose - # assumes jpg, png, but not jpeg... - # TODO - what if jpeg? 
- annots = [] - img_id = int(img_fname[:-4]) - for annot in self.annot_data['annotations']: - if img_id == annot['image_id']: - if annot['category_id'] in list(self.labels_dict.values()): - annots.append([annot['category_id'],annot['bbox'][0],annot['bbox'][1],annot['bbox'][2],annot['bbox'][3]]) - - return np.array(annots) - - def get_gt_annots(self): - """goes into the image folder, calls get_annots to extract image annotation - - Returns: - dict: all annotations - """ - - # create dictionary of gt annots - # for img in os.listdir(self.img_folder_path): - # self.get_annots(img) - all_gt_annots = {img: self.get_annots(img) for img in os.listdir(self.img_folder_path)} - return all_gt_annots - -if __name__ == '__main__': - # get_annots() - annots_obj = AnnotsGTGetter() - gt_dict = annots_obj.get_gt_annots() - print (gt_dict) - # annots_obj.get_imgs() - diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hitman David Foster And Friends 2008 Dts 720p Mkv Concert 2021.md b/spaces/terfces0erbo/CollegeProjectV2/Hitman David Foster And Friends 2008 Dts 720p Mkv Concert 2021.md deleted file mode 100644 index 0f8b48d812988b6af55bf4a42fd773505fd0f9cf..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hitman David Foster And Friends 2008 Dts 720p Mkv Concert 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Hitman David Foster And Friends 2008 Dts 720p Mkv Concert


      Downloadhttps://bytlly.com/2uGiJg



      -
      -Andrea Bocelli Vivere Live in Tuscany 1080p 2008 DTS Avril Lavigne Live at the Roxy Theater... David Foster and Friends Part II Hitman Returns 2011 720p MP4 (English) (updated February 10, 2019). Retrieved February 10, 2019. ↑ Andrew E. Cunliffe. Hitman Returns: Trailer Parade. Cultura Magazine (English) (English) Retrieved February 10, 2019. ↑ Chris Mallory. Hitman Returns: Trailer Review. GameTrailers (English) (English) Retrieved February 10, 2019. ↑ Hitman Returns (English). StopGame.ru Retrieved February 10, 2019. ↑ Totally Awesome. Eurogamer (English) Retrieved February 10, 2019.↑ Hitman Returns. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/tfwang/PITI-Synthesis/glide_text2im/nn.py b/spaces/tfwang/PITI-Synthesis/glide_text2im/nn.py deleted file mode 100644 index dfcaa55fbf6c765a374be678288a2d34dc34635e..0000000000000000000000000000000000000000 --- a/spaces/tfwang/PITI-Synthesis/glide_text2im/nn.py +++ /dev/null @@ -1,177 +0,0 @@ -""" -Various utilities for neural networks. -""" - -import math - -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * th.sigmoid(x) - -class GroupNorm32(nn.GroupNorm): - def __init__(self, num_groups, num_channels, swish, eps=1e-5): - super().__init__(num_groups=num_groups, num_channels=num_channels, eps=eps) - self.swish = swish - - def forward(self, x): - y = super().forward(x.float()).to(x.dtype) - if self.swish == 1.0: - y = F.silu(y) - elif self.swish: - y = y * F.sigmoid(y * float(self.swish)) - return y - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def update_ema(target_params, source_params, rate=0.99): - """ - Update target parameters to be closer to those of source parameters using - an exponential moving average. - - :param target_params: the target parameter sequence. - :param source_params: the source parameter sequence. - :param rate: the EMA rate (closer to 1 means slower). - """ - for targ, src in zip(target_params, source_params): - targ.detach().mul_(rate).add_(src, alpha=1 - rate) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels, swish=0.0): - """ - Make a standard normalization layer. - - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(num_channels=channels, num_groups=32, swish=swish) - - -def timestep_embedding(timesteps, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. 
- """ - half = dim // 2 - freqs = th.exp( - -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = th.cat([th.cos(args), th.sin(args)], dim=-1) - if dim % 2: - embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1) - return embedding - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(th.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - with th.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with th.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = th.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads diff --git a/spaces/thelou1s/sleep_data/main.py b/spaces/thelou1s/sleep_data/main.py deleted file mode 100644 index 9c1cab9a57432098de869e202ed73161af33d182..0000000000000000000000000000000000000000 --- a/spaces/thelou1s/sleep_data/main.py +++ /dev/null @@ -1,16 +0,0 @@ -# This is a sample Python script. - -# Press ⇧F10 to execute it or replace it with your code. -# Press Double ⇧ to search everywhere for classes, files, tool windows, actions, and settings. - - -def print_hi(name): - # Use a breakpoint in the code line below to debug your script. - print(f'Hi, {name}') # Press ⌘F8 to toggle the breakpoint. - - -# Press the green button in the gutter to run the script. -if __name__ == '__main__': - print_hi('PyCharm') - -# See PyCharm help at https://www.jetbrains.com/help/pycharm/ diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Como Configurar Modem Infinitum Huawei Hg530.md b/spaces/tialenAdioni/chat-gpt-api/logs/Como Configurar Modem Infinitum Huawei Hg530.md deleted file mode 100644 index 0c53bed7c47de22dc944c347854aaa99cf082de8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Como Configurar Modem Infinitum Huawei Hg530.md +++ /dev/null @@ -1,33 +0,0 @@ -
      -

How to configure the Infinitum Huawei HG530 modem

      -

The Infinitum Huawei HG530 modem is a device that lets you connect to the internet over an ADSL telephone line. The modem has several configuration options that you can adjust to suit your needs and preferences. In this article we explain how to access the modem, how to change the access password, how to configure the internet connection mode, and how to open ports for certain applications.

      -

How to access the modem

      -

To access the Infinitum Huawei HG530 modem you need a computer connected to the modem's local network, either by cable or over Wi-Fi. You also need to know the modem's IP address, which by default is 192.168.1.254, and the access password, which by default is the first six characters of the modem's MAC address[^1^]. The MAC address is a unique code that identifies the device; you can find it on a label on the back or bottom of the modem.
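As a quick illustration of the default-password rule described above — this is only a sketch, and both the helper name and the sample MAC address are invented for the example — the default password is simply the first six characters of the MAC address with the separators removed:

```python
def default_hg530_password(mac_address: str) -> str:
    # Remove ":" or "-" separators and keep the first six characters,
    # which is what the article describes as the HG530's default web password.
    return mac_address.replace(":", "").replace("-", "")[:6]

# Hypothetical MAC address, as printed on the label of the modem.
print(default_hg530_password("2D:4A:1F:9C:0B:7E"))  # prints "2D4A1F"
```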

      -

      como configurar modem infinitum huawei hg530


      DOWNLOAD >>> https://urlcod.com/2uKaBQ



      -

Once you have this information, follow these steps:

      -
        -
1. Open your preferred web browser, type the modem's IP address into the address bar, and press Enter.
2. A login screen will appear. Enter the access password and click Login.
3. If you logged in correctly, you will see the main page of the modem's web interface, where you can check the connection status, the Wi-Fi network name, the number of connected devices, and other options.
      -

How to change the access password

      -

It is a good idea to change the modem's access password to something more secure and personal, so that other people cannot access your modem without your permission. To do so, follow these steps:

      -
        -
1. Access the modem as explained in the previous section.
2. On the main page of the web interface, click the Maintenance menu in the upper right corner.
3. In the submenu that opens, click Account Management.
4. A screen will appear showing the modem's username (admin) and current password. To change it, type a new password in the New Password field and repeat it in the Confirm Password field. The password must be between 6 and 32 characters long and may contain letters, numbers, and symbols.
5. Click Apply to save the changes.
6. From now on, you will have to use the new password to access the modem.
      -

How to configure the internet connection mode

      -

The Infinitum Huawei HG530 modem can connect to the internet in two modes: PPPoE or Bridge. In PPPoE mode, the modem establishes the internet connection itself, using a username and password provided by your service provider. In Bridge mode, the modem acts as a bridge between your computer and the telephone line, and you need to use connection software on your computer to get online.

      -

To configure the internet connection mode, follow these steps:

      -
        -
1. Access the modem as explained in the previous section.
2. On the main page of the web interface, click the Basic menu on the left-hand side.
3. In the submenu that opens, click WAN Settings.
4. A screen will appear showing the parameters of the internet connection. To change the connection mode, click the Edit button next to the Connection Mode parameter.
5. A pop-up window will appear

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Cracking The Coding Interview 189 Programming Questions And Solutions NEW.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Cracking The Coding Interview 189 Programming Questions And Solutions NEW.md deleted file mode 100644 index bde590d0e406238416bea8925ae99e0d86392520..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Cracking The Coding Interview 189 Programming Questions And Solutions NEW.md +++ /dev/null @@ -1,14 +0,0 @@ - -

        How to Download Cracking the Coding Interview 189 Programming Questions and Solutions for Free

        -

        If you are preparing for a software engineering job interview, you might have heard of the book Cracking the Coding Interview: 189 Programming Questions and Solutions by Gayle Laakmann McDowell. This book is one of the most popular and comprehensive resources for coding interview preparation. It covers topics such as data structures, algorithms, system design, behavioral questions, and more. It also provides detailed explanations and solutions for 189 real-world programming questions that are frequently asked by top tech companies.

        -

        However, this book is not cheap. The latest edition (6th edition) costs $39.95 on Amazon. If you are on a tight budget or just want to try it out before buying, you might be looking for a way to download cracking the coding interview 189 programming questions and solutions for free.

        -

        download cracking the coding interview 189 programming questions and solutions


        Downloadhttps://urlcod.com/2uK7BV



        -

        Fortunately, there are some options available for you to download cracking the coding interview 189 programming questions and solutions for free legally and safely. Here are some of them:

        -
          -
        • Cracking the Coding Interview Official Website: The easiest and most straightforward way to download cracking the coding interview 189 programming questions and solutions for free is to visit the official website of the book: https://www.crackingthecodinginterview.com/. On this website, you can find a lot of useful information and resources related to the book, such as sample chapters, videos, articles, podcasts, and more. You can also download a PDF file that contains 70 of the 189 programming questions and solutions from the book for free. You just need to provide your email address and agree to receive occasional updates from the author.
        • -
        • Cracking the Coding Interview GitHub Repository: Another option to download cracking the coding interview 189 programming questions and solutions for free is to visit the GitHub repository of the book: https://github.com/careercup/CtCI-6th-Edition. On this repository, you can find the source code and test cases for all the 189 programming questions and solutions from the book in various programming languages, such as Java, Python, C++, C#, Ruby, JavaScript, Swift, and more. You can also find some additional resources, such as links to online judges, blogs, forums, books, courses, and more. You can download or clone the repository to your local machine for free.
        • -
        • Cracking the Coding Interview Online Platforms: A third option to download cracking the coding interview 189 programming questions and solutions for free is to use some online platforms that offer coding interview preparation services. For example, you can use HackerRank, LeetCode, InterviewBit, InterviewCake, or Pramp. These platforms provide hundreds or thousands of coding problems that are similar or identical to the ones from the book. You can also find detailed explanations and solutions for these problems on these platforms. You can access these platforms for free or with a premium subscription that offers more features and benefits.
        • -
        -

        These are some of the ways to download cracking the coding interview 189 programming questions and solutions for free legally and safely. However, you should keep in mind that downloading or using free versions of the book might not give you the best experience, quality, or security. If you want to enjoy all the benefits and features of the book, you should consider buying a copy from Amazon or other authorized sellers.

        ddb901b051
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/College Brawl Mod APK How to Download and Install the Latest Version for Free.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/College Brawl Mod APK How to Download and Install the Latest Version for Free.md deleted file mode 100644 index f7d72241a0bda72a399c03eaeba389feb392160b..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/College Brawl Mod APK How to Download and Install the Latest Version for Free.md +++ /dev/null @@ -1,61 +0,0 @@ -
        -

        , etc. I also used markdown elements to render the article in a visually appealing manner, such as headings, lists, code blocks, etc. 6. I wrote a conclusion paragraph that summarizes the main points of the article and provides a call to action for the reader. I also added five unique FAQs after the conclusion to answer some common questions about the topic. 7. I wrote a custom message " Here are the two tables that you requested: Table 1: Outline of the Article | Heading | Subheading | Content | | --- | --- | --- | | H1: College Brawl Mod APK Latest Version: Everything You Need to Know | N/A | Introduction: What is College Brawl? What is mod apk? Why download it? | | H2: Features of College Brawl Mod APK | H3: Unlimited Money | How to get unlimited money in the game | | | H3: Unlocked Characters | How to unlock all characters in the game | | | H3: No Ads | How to enjoy the game without ads | | | H3: Other Features | How to access other features such as custom skins, weapons, etc. | | H2: How to Download and Install College Brawl Mod APK | H3: Requirements | What are the minimum requirements for downloading and installing the mod apk | | | H3: Steps | What are the steps for downloading and installing the mod apk | | | H3: Tips and Tricks | What are some tips and tricks for playing the game with mod apk | | H2: Comparison Between College Brawl Mod APK and Original Game | N/A | Table: How does mod apk differ from original game in terms of features, gameplay, graphics, etc. | | H2: Conclusion | N/A | Summary: What are the main points of the article? Call to action: Why should readers download and install mod apk? | | H2: FAQs | N/A | List of five questions and answers about college brawl mod apk | Table 2: Article with HTML Formatting

        College Brawl Mod APK Latest Version: Everything You Need to Know

        -

        Do you love college-themed fighting games? Do you want to experience the ultimate college brawl with your friends? If yes, then you should try College Brawl, a fun and addictive game that lets you choose your character, customize your appearance, and fight against other players in various modes.

        -

        college brawl mod apk latest version


        Downloadhttps://bltlly.com/2uOhxD



        -

        But what if you want to enjoy more features and benefits in the game? What if you want to have unlimited money, unlock all characters, remove ads, and access other cool stuff? Well, there is a way to do that. You can download and install College Brawl Mod APK latest version on your Android device.

        -

        What is College Brawl Mod APK? It is a modified version of College Brawl that gives you access to all the premium features of the game for free. You don't need to spend any money or watch any ads to play this game. You can just download it from a reliable source, install it on your device, and start playing right away.

        -

        In this article, we will tell you everything you need to know about College Brawl Mod APK latest version. We will show you what features it offers, how to download and install it, how it compares with the original game, and more. So, if you are ready to join the ultimate college brawl with mod apk, read on.

        -

        Download college brawl mod apk unlocked all
        -College brawl hack mod apk free download
        -How to install college brawl mod apk on android
        -College brawl mod apk latest version 1.4.1
        -College brawl mod apk unlimited money and gems
        -College brawl mod apk offline mode
        -College brawl mod apk no root required
        -College brawl mod apk with school fight feature
-College brawl mod apk backup download link
        -College brawl mod apk pure tv version
        -College brawl mod apk for pc windows 10
        -College brawl mod apk gameplay and review
        -College brawl mod apk cheats and tips
        -College brawl mod apk best weapons and items
        -College brawl mod apk update and news
        -College brawl mod apk 2023 latest version
        -College brawl mod apk full unlocked premium
        -College brawl mod apk with obb data file
        -College brawl mod apk high graphics quality
        -College brawl mod apk english language support
        -College brawl modded apk download for android
        -College brawl hacked apk free download link
        -College brawl cracked apk latest version download
        -College brawl premium apk unlocked all features
        -College brawl unlimited apk with unlimited resources
        -Download college brawl hack mod for android
        -Download college brawl crack mod for pc
        -Download college brawl premium mod for ios
        -Download college brawl unlimited mod for free
        -Download college brawl full mod with obb file
        -How to play college brawl hack mod on android
        -How to play college brawl crack mod on pc
        -How to play college brawl premium mod on ios
        -How to play college brawl unlimited mod offline
        -How to play college brawl full mod with obb file
        -College brawl hack gameplay and review video
        -College brawl crack review and rating online
        -College brawl premium features and benefits list
        -College brawl unlimited resources and cheats guide
        -College brawl full version features and comparison table
        -Best site to download college brawl hack mod apk
        -Best site to download college brawl crack mod apk
        -Best site to download college brawl premium mod apk
        -Best site to download college brawl unlimited mod apk
        -Best site to download college brawl full mod apk
        -Best tips and tricks for college brawl hack mod apk
        -Best tips and tricks for college brawl crack mod apk
        -Best tips and tricks for college brawl premium mod apk
        -Best tips and tricks for college brawl unlimited mod apk

        -

        Features of College Brawl Mod APK

        -

        College Brawl Mod APK latest version has many features that make it better than the original game. Here are some of them:

        -

        Unlimited Money 197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cube Cipher The Ultimate Puzzle Game MOD APK Download.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cube Cipher The Ultimate Puzzle Game MOD APK Download.md deleted file mode 100644 index e7e5256ad94e9b983603f08e9daf2104c8464b67..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cube Cipher The Ultimate Puzzle Game MOD APK Download.md +++ /dev/null @@ -1,105 +0,0 @@ - -

        Download Cube Cipher Mod APK: A Fun and Challenging Puzzle Game

        -

        If you are looking for a new and exciting puzzle game to test your logic and creativity, then you should try Cube Cipher. Cube Cipher is a 3D puzzle game that will challenge your brain with hundreds of levels of rotating cubes, symbols, and colors. You can download Cube Cipher Mod APK from this page and enjoy the game with all the features unlocked.

        -

        download cube cipher mod apk


        DOWNLOAD ✫✫✫ https://bltlly.com/2uOt6X



        -

        What is Cube Cipher?

        -

        Cube Cipher is a puzzle game developed by ZPLAY Games, a company that specializes in casual and arcade games. The game was released in 2020 and has received positive reviews from players and critics alike. The game has a simple but addictive gameplay that will keep you hooked for hours.

        -

        Features of Cube Cipher

        -

        Some of the features of Cube Cipher are:

        -
          -
        • Beautiful 3D graphics and animations
        • -
        • Over 300 levels of varying difficulty and complexity
        • -
        • Different types of cubes, symbols, and colors to match and rotate
        • -
        • Relaxing music and sound effects
        • -
        • Easy to learn but hard to master
        • -
        • No time limit or pressure
        • -
        • No internet connection required
        • -
        -

        How to play Cube Cipher

        -

        The gameplay of Cube Cipher is simple but challenging. You have to match the symbols on the cubes by rotating them in different directions. You can rotate the cubes horizontally, vertically, or diagonally. You have to make sure that the symbols on the adjacent faces of the cubes are the same. You can also use the colors of the cubes as a hint. The game will show you how many moves you need to complete each level. You can also use hints or skip levels if you get stuck.

        -

        Why download Cube Cipher Mod APK?

        -

        Cube Cipher is a free game that you can download from the Google Play Store or the App Store. However, if you want to enjoy the game without any limitations or ads, you should download Cube Cipher Mod APK from this page. Cube Cipher Mod APK is a modified version of the game that has all the features unlocked and no ads.

        -

        Benefits of Cube Cipher Mod APK

        -

        Some of the benefits of Cube Cipher Mod APK are:

        -

        download cube cipher mod apk latest version
        -download cube cipher mod apk unlocked
        -download cube cipher mod apk for android
        -download cube cipher mod apk free
        -download cube cipher mod apk full
        -download cube cipher mod apk hack
        -download cube cipher mod apk premium
        -download cube cipher mod apk pro
        -download cube cipher mod apk cracked
        -download cube cipher mod apk no ads
        -download cube cipher mod apk unlimited money
        -download cube cipher mod apk offline
        -download cube cipher mod apk online
        -download cube cipher mod apk 4.7.0
        -download cube cipher mod apk update
        -download cube cipher mod apk 2023
        -download cube cipher mod apk without root
        -download cube cipher mod apk with obb
        -download cube cipher mod apk from apkmody.io[^1^]
        -download cube cipher mod apk from google play store
        -how to download cube cipher mod apk
        -where to download cube cipher mod apk
        -why download cube cipher mod apk
        -what is cube cipher mod apk
        -who created cube cipher mod apk
        -benefits of downloading cube cipher mod apk
        -features of downloading cube cipher mod apk
        -reviews of downloading cube cipher mod apk
        -ratings of downloading cube cipher mod apk
        -alternatives to downloading cube cipher mod apk
        -tips for downloading cube cipher mod apk
        -tricks for downloading cube cipher mod apk
        -guides for downloading cube cipher mod apk
        -tutorials for downloading cube cipher mod apk
        -videos for downloading cube cipher mod apk
        -images for downloading cube cipher mod apk
        -screenshots for downloading cube cipher mod apk
        -wallpapers for downloading cube cipher mod apk
        -themes for downloading cube cipher mod apk
        -icons for downloading cube cipher mod apk
        -widgets for downloading cube cipher mod apk
        -tools for downloading cube cipher mod apk
        -apps for downloading cube cipher mod apk
        -games for downloading cube cipher mod apk
        -puzzles for downloading cube cipher mod apk
        -challenges for downloading cube cipher mod apk
        -levels for downloading cube cipher mod apk
        -modes for downloading cube cipher mod apk
        -genres for downloading cube cipher mod apk

        -
          -
        • You can access all the levels without waiting or paying
        • -
        • You can use unlimited hints and skips without watching ads or spending coins
        • -
        • You can remove all the ads from the game and enjoy a smooth and uninterrupted gameplay
        • -
        • You can update the game without losing your progress or mod features
        • -
        • You can play the game offline without any problem
        • -
        -

        How to download and install Cube Cipher Mod APK

        -

        The process of downloading and installing Cube Cipher Mod APK is very easy and fast. You just need to follow these steps:

        -
          -
        1. Click the Download button at the top of this page to download the Cube Cipher Mod APK file.
        2. -
        3. Save the file in your device's download folder.
        4. -
        5. Now click on the downloaded Cube Cipher file to install it and wait for the installation to complete.
        6. -
        7. If you see a pop-up message asking for permission to install unknown apps, go to your device's settings and allow it.
        8. -
        9. Once the installation is done, you can open the game and enjoy it with all the mod features.
        10. -
        -

        Conclusion

        -

        Cube Cipher is a fun and challenging puzzle game that will test your logic and creativity. You can download Cube Cipher Mod APK from this page and enjoy the game with all the features unlocked and no ads. The game has beautiful 3D graphics, relaxing music, and over 300 levels to keep you entertained for hours. If you like puzzle games, you should definitely try Cube Cipher.

        -

        FAQs

        -

        Here are some frequently asked questions about Cube Cipher Mod APK:

        -
        • Q: Is Cube Cipher Mod APK safe to download and install?
        • -
        • A: Yes, Cube Cipher Mod APK is safe and virus-free. You can download it from this page without any risk. However, you should always download mod APKs from trusted sources and scan them before installing.
        • -
        • Q: What is the latest version of Cube Cipher Mod APK?
        • -
        • A: The latest version of Cube Cipher Mod APK is 1.0.3, which was updated on June 15, 2023. You can download it from this page and enjoy the latest features and bug fixes.
        • -
        • Q: Can I play Cube Cipher Mod APK on PC or iOS devices?
        • -
        • A: No, Cube Cipher Mod APK is only compatible with Android devices. If you want to play Cube Cipher on PC or iOS devices, you will need to download the original version of the game from the Google Play Store or the App Store.
        • -
        • Q: How can I contact the developer of Cube Cipher?
        • -
        • A: If you have any questions, suggestions, or feedback about Cube Cipher, you can contact the developer of the game by emailing them at support@zplay.com or visiting their website at https://www.zplay.com/.
        • -
        • Q: How can I support the developer of Cube Cipher?
        • -
        • A: If you like Cube Cipher and want to support the developer of the game, you can rate and review the game on the Google Play Store or the App Store. You can also share the game with your friends and family and invite them to play with you.
        • -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Amazing Brawl Stars APK and Enjoy Epic 3v3 Battles.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Amazing Brawl Stars APK and Enjoy Epic 3v3 Battles.md deleted file mode 100644 index f8aca0f4212ab6b71ac1f94c000d798000289510..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Amazing Brawl Stars APK and Enjoy Epic 3v3 Battles.md +++ /dev/null @@ -1,84 +0,0 @@ - -

        Amazing Brawl Stars APK: How to Download, Install, and Play

        -

        If you are looking for a fast-paced, action-packed, and fun multiplayer game for your mobile device, you should definitely check out Brawl Stars. This game from Supercell, the makers of Clash of Clans and Clash Royale, has everything you need to enjoy hours of entertainment: colorful graphics, catchy music, diverse characters, and exciting game modes. But what if you can't access the game from the official app store? Don't worry, there is a solution: Brawl Stars APK.

        -

        amazing brawl stars apk


        Download File > https://bltlly.com/2uOte0



        -

        An APK is a file format that allows you to install applications on your Android device without using the Google Play Store. This can be useful if you want to access apps that are not available in your region, or if you want to get the latest updates before they are officially released. In this article, we will show you how to download and install Brawl Stars APK on your device, and how to play the game and enjoy its features.

        -

        How to Download and Install Brawl Stars APK

        -

        Downloading and installing Brawl Stars APK is very easy and only takes a few minutes. Just follow these simple steps:

        -
          -
        1. Go to [this website](^1^) and click on the download button. This will start downloading the Brawl Stars APK file on your device.
        2. -
        3. Once the download is complete, locate the file in your device's file manager and tap on it. This will prompt you to enable unknown sources in your device's settings. This is necessary to install apps from outside the Google Play Store.
        4. -
        5. After enabling unknown sources, go back to the file manager and tap on the Brawl Stars APK file again. This will start the installation process. Follow the instructions on the screen and wait for it to finish.
        6. -
        7. Once the installation is done, you will see a Brawl Stars icon on your home screen or app drawer. Tap on it to launch the game and enjoy!
        8. -
        -

        Tips and warnings:

        -

        -
          -
        • Make sure you have enough storage space on your device before downloading the Brawl Stars APK file. The file size is about 150 MB.
        • -
        • Make sure you have a stable internet connection while playing the game. Brawl Stars is an online game that requires constant data transfer.
        • -
        • Be careful when downloading APK files from unknown sources. Some of them may contain viruses or malware that can harm your device or steal your personal information. Always use trusted websites like [this one](^1^).
        • -
        • Be aware that using APK files may violate the terms of service of some apps or games. Use them at your own risk.
        • -
        -

        How to Play Brawl Stars and Enjoy its Features

        -

        Brawl Stars is a game that offers a variety of game modes and characters for you to choose from and play with. Here is an overview of what you can expect from this amazing game:

        -

        Game modes

        -

        Brawl Stars has four main game modes: Smash & Grab, Heist, Showdown, and Bounty. Each one has a different objective and requires different strategies.

        -
          -
        • Smash & Grab: In this mode, you have to team up with two other players and collect gems from the center of the map while fighting against another team of three players. The first team to collect 10 gems and hold them for 15 seconds wins the match.
        • -
        • Heist: In this mode, you have to either defend or attack a safe that contains valuable loot. You can either join the team of defenders and prevent the attackers from breaking the safe, or join the team of attackers and try to blow up the safe before the time runs out.
        • -
        • Showdown: In this mode, you have to survive in a shrinking arena with 9 other players. You can either fight solo or team up with a friend. The last player or team standing wins the match.
        • -
        • Bounty: In this mode, you have to collect stars by defeating your enemies while avoiding getting killed yourself. Each kill gives you one star, but each death makes you lose all your stars. The team with the most stars at the end of the match wins.
        • -
        -

        Characters

        -

        Brawl Stars has over 40 characters, called Brawlers, that you can unlock and play with. Each Brawler has a unique appearance, personality, and ability. Some of them are:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Type | Ability
Shelly | Starter | Shoots a burst of shells that deal more damage at close range. Her super is a powerful blast that can destroy obstacles and push back enemies.
Nita | Trophy Road | Throws a shockwave that damages and stuns enemies. Her super is summoning a big bear that attacks nearby enemies.
Colt | Trophy Road | Fires a rapid burst of bullets that deal high damage at long range. His super is a longer burst of bullets that can pierce through enemies and obstacles.
Bull | Trophy Road | Fires a spread of shotgun shells that deal more damage at close range. His super is a furious charge that can knock back enemies and break through obstacles.
Jessie | Trophy Road | Shoots an energy orb that bounces off enemies and hits up to three targets. Her super is deploying a turret that shoots at nearby enemies.
Brock | Trophy Road | Fires a rocket that explodes on impact and deals splash damage. His super is launching a barrage of rockets that cover a large area.
        -

        Tips and tricks:

        -
          -
        • Try different Brawlers and game modes to find your favorite combination. Each Brawler has its own strengths and weaknesses, and each game mode requires different tactics.
        • -
        • Upgrade your Brawlers by collecting power points and coins. Power points increase your Brawler's stats, and coins allow you to level up your Brawler and unlock new abilities.
        • -
        • Use the environment to your advantage. You can hide behind bushes, walls, and barrels to ambush your enemies or escape from danger. You can also destroy some obstacles with your attacks or supers to create new paths or expose hidden enemies.
        • -
        • Work with your teammates and communicate with them. You can use the chat or the quick chat buttons to send messages or emojis to your teammates. You can also use the ping button to mark locations or enemies on the map.
        • -
        • Have fun and don't get frustrated. Brawl Stars is a game that rewards skill, strategy, and teamwork, but also luck, randomness, and fun. Sometimes you will win, sometimes you will lose, but always remember to enjoy the game and learn from your mistakes.
        • -

          Conclusion

          -

          Brawl Stars is an amazing game that you can download and play on your Android device using the Brawl Stars APK file. This file allows you to access the game without using the Google Play Store, which can be useful if you want to get the latest updates or bypass regional restrictions. Brawl Stars APK is easy to download and install, and it lets you enjoy all the features of the game, such as the different game modes and characters. Brawl Stars is a game that offers endless fun and excitement for players of all ages and preferences.

          -

          If you liked this article, please share it with your friends and leave us a comment below. We would love to hear your feedback and suggestions on how to improve our content. Also, if you have any questions about Brawl Stars APK or anything else related to the game, feel free to ask us in the comment section. We will try our best to answer them as soon as possible.

          - 401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK 5.1.1 and Discover the Secrets of the Spaceship with Your Teammates.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK 5.1.1 and Discover the Secrets of the Spaceship with Your Teammates.md deleted file mode 100644 index 56043043a676e370b0dcefb88a2ec27299e8aa7d..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK 5.1.1 and Discover the Secrets of the Spaceship with Your Teammates.md +++ /dev/null @@ -1,108 +0,0 @@ -
          -

          Among Us Download 5.1.1 APK: How to Play the Latest Version of the Popular Game

          -

          If you are a fan of multiplayer games, you have probably heard of Among Us, one of the most popular games of 2020 and 2021. This game has taken the world by storm with its unique gameplay, colorful graphics, and hilarious moments. But did you know that there is a new version of the game available for download? In this article, we will tell you everything you need to know about Among Us download 5.1.1 APK, how to get it, and how to play it.

          -

          among us download 5.1.1 apk


          Download ✦✦✦ https://bltlly.com/2uOl0Q



          -

          What is Among Us?

          -

          Before we dive into the details of the new version, let's first recap what Among Us is all about. If you are already familiar with the game, you can skip this section and go straight to the next one.

          -

          A multiplayer game of teamwork and betrayal

          -

          Among Us is a game that can be played online or over local WiFi with 4 to 15 players. The game is set in a spaceship that is preparing for departure, but there is a catch: one or more players are impostors who want to sabotage the mission and kill everyone else. The rest of the players are crewmates who have to work together to complete tasks and find out who the impostors are.

          -

          A social deduction game with different roles and modes

          -

          The game is based on social deduction, which means that players have to use their communication skills, logic, and intuition to figure out who is lying and who is telling the truth. The game has different roles for players, such as impostor, crewmate, sheriff, doctor, engineer, jester, and more. Each role has its own abilities and objectives that make the game more fun and challenging. The game also has different modes, such as classic, hide and seek, proximity chat, and more. Each mode has its own rules and settings that change the gameplay and add variety.

          -

          among us 5.1.1 apk free download
          -among us latest version 5.1.1 apk
          -among us mod apk 5.1.1 download
          -among us 5.1.1 apk for android
          -among us 5.1.1 apk download link
          -among us 5.1.1 apk update
          -among us 5.1.1 apk file
          -among us 5.1.1 apk mirror
          -among us 5.1.1 apk hack
          -among us 5.1.1 apk no ads
          -among us 5.1.1 apk offline
          -among us 5.1.1 apk online
          -among us 5.1.1 apk unlimited money
          -among us 5.1.1 apk unlocked skins
          -among us 5.1.1 apk with voice chat
          -among us 5.1.1 apk obb
          -among us 5.1.1 apk xapk
          -among us 5.1.1 apk apkpure
          -among us 5.1.1 apk uptodown
          -among us 5.1.1 apk rexdl
          -among us 5.1.1 apk revdl
          -among us 5.1.1 apk mod menu
          -among us 5.1.1 apk always impostor
          -among us 5.1.1 apk all pets
          -among us 5.1.1 apk beta
          -among us 5.1.1 apk cracked
          -among us 5.1.1 apk cheat
          -among us 5.1.1 apk download for pc
          -among us 5.1.1 apk download for ios
          -among us 5.1.1 apk download for windows
          -among us 5.1.1 apk download for mac
          -among us 5.1.1 apk download for laptop
          -among us 5.1.1 apk download for tablet
          -among us 5.1.1 apk download for chromebook
          -among us 5.1.1 apk download for firestick
          -among us 5.1.1 apk download for smart tv
-among us 5.1.1 apk download for xbox one
-among us 5.1.1 apk download for ps4
-among us 5.1.1 apk download for nintendo switch

          -

          A cross-platform game with millions of players

          -

          One of the best things about Among Us is that it is a cross-platform game, which means that you can play it on different devices, such as PC, Android, iOS, Nintendo Switch, Xbox One, and PlayStation 4. You can also play with your friends or join random lobbies with people from all over the world. The game has millions of players who create and join different servers every day. You can also customize your character with different skins, hats, pets, and colors.

          -

          Why download Among Us 5.1.1 APK?

          -

          Now that you know what Among Us is, you might be wondering why you should download Among Us 5.1.1 APK instead of the official app from Google Play or App Store. Here are some reasons why:

          -

          The latest version of the game with new features and improvements

          -

          Among Us 5.1.1 APK is the latest version of the game that was released on June 13th, 2023 by Innersloth LLC. This version includes new features and improvements that make the game more enjoyable and stable. Some of these features are:

          -
            -
• A new map called The Airship that is based on another Innersloth game called Henry Stickmin. This map is the largest and most complex one in the game so far, with multiple floors, rooms, tasks, vents, ladders, and more.
          • -
          • A new account system that allows players to create and link their accounts across different platforms, report and ban toxic players, and access free chat.
          • -
          • A new art style that updates the graphics and animations of the game, making it more smooth and consistent.
          • -
          • A new quick chat option that lets players communicate faster and easier with pre-set messages.
          • -
          • A new free cursor option that lets players use their mouse or touch screen to interact with the game instead of the joystick.
          • -
          • Various bug fixes and optimizations that improve the performance and stability of the game.
          • -
          -

          The benefits of downloading the APK file instead of the official app

          -

          Another reason why you might want to download Among Us 5.1.1 APK is that it has some benefits over the official app from Google Play or App Store. Some of these benefits are:

          -
            -
          • You can download the APK file for free without paying any fees or subscriptions.
          • -
          • You can download the APK file faster and easier than the official app, especially if you have a slow or unstable internet connection.
          • -
          • You can download the APK file without any restrictions or limitations, such as region locks or device compatibility issues.
          • -
          • You can download the APK file without any ads or in-app purchases that might interrupt your gameplay or ask for your personal information.
          • -
          • You can download the APK file without any updates or notifications that might consume your data or battery life.
          • -
          -

          The risks and precautions of downloading the APK file from unknown sources

          -

          However, downloading Among Us 5.1.1 APK also has some risks and precautions that you should be aware of before you proceed. Some of these risks and precautions are:

          -
            -
          • You might download a fake or malicious APK file that might harm your device or steal your data. To avoid this, you should always download the APK file from a reliable and trusted source, such as [APKPure] or [APKMirror].
          • -
          • You might download an outdated or incompatible APK file that might not work properly or cause errors on your device. To avoid this, you should always check the version and requirements of the APK file before you download it, and make sure it matches your device specifications.
          • -
          • You might download an illegal or unauthorized APK file that might violate the terms and conditions of the game developer or publisher. To avoid this, you should always respect the intellectual property rights of the game creator, and only use the APK file for personal and non-commercial purposes.
          • -
• You might download an unsafe or insecure APK file that could expose your device or data to viruses, malware, spyware, or hackers. To avoid this, always scan the APK file with reputable antivirus software before you install it, and only grant the permissions that the game actually needs to function.
          • -
          -

          How to download and install Among Us 5.1.1 APK?

          -

          If you have decided to download Among Us 5.1.1 APK, you might be wondering how to do it. Don't worry, we have got you covered. Here are the steps you need to follow:

          -

          Step 1: Find a reliable source for the APK file

          -

          The first step is to find a reliable source for the APK file. As we mentioned earlier, you should only download the APK file from a trusted and verified website, such as [APKPure] or [APKMirror]. These websites offer high-quality and safe APK files that are regularly updated and tested. You can also read reviews and ratings from other users to see if they had any issues with the APK file.

          -

          Step 2: Enable unknown sources on your device settings

          -

          The second step is to enable unknown sources on your device settings. This is necessary because by default, your device will not allow you to install apps from sources other than Google Play or App Store. To enable unknown sources, you need to go to your device settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install apps from any source, including the APK file.

          -

          Step 3: Download and install the APK file on your device

          -

          The third step is to download and install the APK file on your device. To do this, you need to go to the website where you found the APK file, then click on the download button. This will start downloading the APK file to your device storage. Once the download is complete, you need to open the APK file and follow the instructions on the screen to install the app on your device. This might take a few minutes depending on your device speed and the size of the APK file.

          -

          Step 4: Launch the game and enjoy the new version

          -

          The fourth and final step is to launch the game and enjoy the new version. To do this, you need to find the game icon on your device home screen or app drawer, then tap on it. This will open the game and let you play the latest version of Among Us. You can now join or create lobbies, customize your character, chat with other players, and have fun with the new features and improvements.

          -

          Conclusion

          -

          In conclusion, Among Us download 5.1.1 APK is a great way to play the latest version of the popular game without paying any fees or waiting for updates. However, you should also be careful and cautious when downloading and installing the APK file from unknown sources, as it might pose some risks and challenges. If you follow the steps we outlined in this article, you should be able to download and install Among Us 5.1.1 APK safely and easily on your device. We hope you found this article helpful and informative, and we wish you a happy gaming experience.

          -

          FAQs

          -

Here are some frequently asked questions about the Among Us 5.1.1 APK download:

          -
            -
• Q: Is the Among Us 5.1.1 APK download safe?
          • -
• A: Yes, as long as you download the APK file from a reliable and trusted source, such as [APKPure] or [APKMirror], and scan it with reputable antivirus software before you install it.
          • -
• Q: Is the Among Us 5.1.1 APK download legal?
          • -
          • A: Yes, as long as you respect the intellectual property rights of the game developer and publisher, and only use the APK file for personal and non-commercial purposes.
          • -
• Q: Is the Among Us 5.1.1 APK compatible with my device?
          • -
          • A: Yes, as long as your device meets the minimum requirements of the game, which are Android 4.4 or higher, 250 MB of free storage space, and 1 GB of RAM.
          • -
• Q: Is the Among Us 5.1.1 APK kept up to date?
          • -
          • A: Yes, as long as you download the APK file from a website that regularly updates and tests its APK files, such as [APKPure] or [APKMirror].
          • -
• Q: Is the Among Us 5.1.1 APK better than the official app?
          • -
          • A: It depends on your preference and situation. Some people might prefer the official app because it is more secure and convenient, while others might prefer the APK file because it is more flexible and accessible.
          • -

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/ting520/66/devices/device_8950.js b/spaces/ting520/66/devices/device_8950.js deleted file mode 100644 index fe1caad4a8c5eb07633510e1d8a890197056a211..0000000000000000000000000000000000000000 --- a/spaces/ting520/66/devices/device_8950.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 
-qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." 
+ this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform || (exports.Platform = Platform = {})); -const mobile = { - id: 
"com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.50.f5a7d351", - version: "8.9.50.10650", - ver: "8.9.50", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1676531414, - appid: 16, - subid: 537155547, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2535", - display: "Android", - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - ssover: 19, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537155599, - display: 'aPad' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: 'A8.9.50.611', - version: 'A8.9.50.611', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/1999 Applied Practice Macbeth Answer Key Zip.md b/spaces/tioseFevbu/cartoon-converter/scripts/1999 Applied Practice Macbeth Answer Key Zip.md deleted file mode 100644 index 6de90202579ba86921d11068c9c92585875b6269..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/1999 Applied Practice Macbeth Answer Key Zip.md +++ /dev/null @@ -1,27 +0,0 @@ - -

          How to Ace the 1999 Applied Practice Macbeth Test

          -

          If you are preparing for the 1999 Applied Practice Macbeth test, you might be wondering how to study effectively and get a high score. The test consists of selections from Macbeth by William Shakespeare and questions on their content, form, and style. You will need to demonstrate your understanding of the literary devices, themes, and characters in the play, as well as your ability to analyze and interpret the text.

          -

          In this article, we will provide you with some tips and resources to help you ace the test. We will also share with you a zip file that contains the answer key and explanations for the multiple-choice and free-response questions. This zip file is based on the official Applied Practice resource guide, which you can find here.

          -

          1999 Applied Practice Macbeth Answer Key Zip


          Download Zip >>>>> https://urlcod.com/2uHwTI



          -

          Tips for Studying for the 1999 Applied Practice Macbeth Test

          -

          Here are some tips that can help you study for the test:

          -
            -
          • Read the play carefully and annotate it. Pay attention to the language, imagery, symbolism, and structure of the play. Identify the main events, conflicts, and themes in each act and scene.
          • -
• Review the vocabulary list for Macbeth. The list includes words from the literary passages and from the questions and answers. You can find the list here. You can also use flashcards or online tools to memorize and practice the words (a minimal quiz sketch follows this list).
          • -
          • Practice answering multiple-choice and free-response questions. You can use the questions from the Applied Practice resource guide or from other sources. Try to answer them within the time limit and check your answers with the answer key.
          • -
          • Understand the scoring guide for the free-response questions. The scoring guide explains how your essay will be evaluated based on your thesis, evidence, analysis, organization, and style. You can find the scoring guide here.
          • -
          • Write practice essays on different topics related to Macbeth. You can use the prompts from the Applied Practice resource guide or from other sources. Try to write clear, coherent, and persuasive essays that demonstrate your understanding of the play and your skills in literary analysis.
          • -
          -
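If you would rather drill the vocabulary from the command line than use a flashcard app (see the second tip above), a few lines of Python are enough. The entries below are placeholders; substitute words and definitions from the actual Applied Practice list.

```python
import random

# Placeholder entries - replace with words and definitions from the
# Applied Practice vocabulary list for Macbeth.
VOCAB = {
    "equivocate": "to use ambiguous language to conceal the truth",
    "harbinger": "a person or thing that signals what is to come",
    "usurp": "to seize a position or power without the right to do so",
}


def quiz(cards: dict) -> None:
    """Show each word in random order and reveal the definition on Enter."""
    words = list(cards)
    random.shuffle(words)
    for word in words:
        input(f"Define '{word}' (press Enter to check): ")
        print(f"  -> {cards[word]}\n")


if __name__ == "__main__":
    quiz(VOCAB)
```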

          Download the 1999 Applied Practice Macbeth Answer Key Zip File

          -

          If you want to check your answers and improve your performance on the test, you can download the 1999 Applied Practice Macbeth Answer Key Zip File here. The zip file contains:

          -

          -
            -
          • The multiple-choice answer key and explanations for each question.
          • -
          • The free-response scoring guide and sample essays for each prompt.
          • -
          • The glossary of literary terms that are relevant to Macbeth.
          • -
          -

          The zip file is a valuable resource that can help you prepare for the test and boost your confidence. However, it is not a substitute for reading and studying the play itself. You should use it as a supplement to your own learning and analysis.
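Once you have downloaded the zip file, Python's standard-library zipfile module can list and extract its contents. The archive name and output folder below are placeholders for whatever you actually saved.

```python
import zipfile

# Placeholder names - use the file you downloaded and any destination folder.
ZIP_PATH = "1999-applied-practice-macbeth-answer-key.zip"
DEST_DIR = "macbeth_answer_key"

with zipfile.ZipFile(ZIP_PATH) as archive:
    # Show what the archive contains before extracting it.
    for name in archive.namelist():
        print(name)
    archive.extractall(DEST_DIR)

print(f"Extracted to {DEST_DIR}/")
```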

          -

          Conclusion

          -

          The 1999 Applied Practice Macbeth test is a challenging but rewarding assessment that can help you develop your critical thinking and writing skills. By following our tips and using our resources, you can study effectively and get a high score on the test. We hope this article has been helpful and we wish you good luck on your test!

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/search_scope.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/search_scope.py deleted file mode 100644 index e4e54c2f4c696407c6de380d44d790412b2d4ee5..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/search_scope.py +++ /dev/null @@ -1,129 +0,0 @@ -import itertools -import logging -import os -import posixpath -import urllib.parse -from typing import List - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.models.index import PyPI -from pip._internal.utils.compat import has_tls -from pip._internal.utils.misc import normalize_path, redact_auth_from_url - -logger = logging.getLogger(__name__) - - -class SearchScope: - - """ - Encapsulates the locations that pip is configured to search. - """ - - __slots__ = ["find_links", "index_urls"] - - @classmethod - def create( - cls, - find_links: List[str], - index_urls: List[str], - ) -> "SearchScope": - """ - Create a SearchScope object after normalizing the `find_links`. - """ - # Build find_links. If an argument starts with ~, it may be - # a local file relative to a home directory. So try normalizing - # it and if it exists, use the normalized version. - # This is deliberately conservative - it might be fine just to - # blindly normalize anything starting with a ~... - built_find_links: List[str] = [] - for link in find_links: - if link.startswith("~"): - new_link = normalize_path(link) - if os.path.exists(new_link): - link = new_link - built_find_links.append(link) - - # If we don't have TLS enabled, then WARN if anyplace we're looking - # relies on TLS. - if not has_tls(): - for link in itertools.chain(index_urls, built_find_links): - parsed = urllib.parse.urlparse(link) - if parsed.scheme == "https": - logger.warning( - "pip is configured with locations that require " - "TLS/SSL, however the ssl module in Python is not " - "available." - ) - break - - return cls( - find_links=built_find_links, - index_urls=index_urls, - ) - - def __init__( - self, - find_links: List[str], - index_urls: List[str], - ) -> None: - self.find_links = find_links - self.index_urls = index_urls - - def get_formatted_locations(self) -> str: - lines = [] - redacted_index_urls = [] - if self.index_urls and self.index_urls != [PyPI.simple_url]: - for url in self.index_urls: - - redacted_index_url = redact_auth_from_url(url) - - # Parse the URL - purl = urllib.parse.urlsplit(redacted_index_url) - - # URL is generally invalid if scheme and netloc is missing - # there are issues with Python and URL parsing, so this test - # is a bit crude. See bpo-20271, bpo-23505. 
Python doesn't - # always parse invalid URLs correctly - it should raise - # exceptions for malformed URLs - if not purl.scheme and not purl.netloc: - logger.warning( - 'The index url "%s" seems invalid, please provide a scheme.', - redacted_index_url, - ) - - redacted_index_urls.append(redacted_index_url) - - lines.append( - "Looking in indexes: {}".format(", ".join(redacted_index_urls)) - ) - - if self.find_links: - lines.append( - "Looking in links: {}".format( - ", ".join(redact_auth_from_url(url) for url in self.find_links) - ) - ) - return "\n".join(lines) - - def get_index_urls_locations(self, project_name: str) -> List[str]: - """Returns the locations found via self.index_urls - - Checks the url_name on the main (first in the list) index and - use this url_name to produce all locations - """ - - def mkurl_pypi_url(url: str) -> str: - loc = posixpath.join( - url, urllib.parse.quote(canonicalize_name(project_name)) - ) - # For maximum compatibility with easy_install, ensure the path - # ends in a trailing slash. Although this isn't in the spec - # (and PyPI can handle it without the slash) some other index - # implementations might break if they relied on easy_install's - # behavior. - if not loc.endswith("/"): - loc = loc + "/" - return loc - - return [mkurl_pypi_url(url) for url in self.index_urls] diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/_structures.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/_structures.py deleted file mode 100644 index 90a6465f9682c886363eea5327dac64bf623a6ff..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/_structures.py +++ /dev/null @@ -1,61 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- - -class InfinityType: - def __repr__(self) -> str: - return "Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return False - - def __le__(self, other: object) -> bool: - return False - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return True - - def __ge__(self, other: object) -> bool: - return True - - def __neg__(self: object) -> "NegativeInfinityType": - return NegativeInfinity - - -Infinity = InfinityType() - - -class NegativeInfinityType: - def __repr__(self) -> str: - return "-Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return True - - def __le__(self, other: object) -> bool: - return True - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return False - - def __ge__(self, other: object) -> bool: - return False - - def __neg__(self: object) -> InfinityType: - return Infinity - - -NegativeInfinity = NegativeInfinityType() diff --git a/spaces/tomaseo2022/Mejorar-Resolucion-Imagen/predict.py b/spaces/tomaseo2022/Mejorar-Resolucion-Imagen/predict.py deleted file mode 100644 index 4a84ec6e4a73fccdde30aed268e7bb1aef42b0ba..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/Mejorar-Resolucion-Imagen/predict.py +++ /dev/null @@ -1,158 +0,0 @@ -import cog -import tempfile -from pathlib import Path -import argparse -import shutil -import os -import cv2 -import glob -import torch -from collections import OrderedDict -import numpy as np -from main_test_swinir import define_model, setup, get_image_pair - - -class Predictor(cog.Predictor): - def setup(self): - model_dir = 'experiments/pretrained_models' - - self.model_zoo = { - 'real_sr': { - 4: os.path.join(model_dir, '003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth') - }, - 'gray_dn': { - 15: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth'), - 25: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth'), - 50: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth') - }, - 'color_dn': { - 15: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth'), - 25: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth'), - 50: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth') - }, - 'jpeg_car': { - 10: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth'), - 20: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth'), - 30: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth'), - 40: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth') - } - } - - parser = argparse.ArgumentParser() - parser.add_argument('--task', type=str, default='real_sr', help='classical_sr, lightweight_sr, real_sr, ' - 'gray_dn, color_dn, jpeg_car') - parser.add_argument('--scale', type=int, default=1, help='scale factor: 1, 2, 3, 4, 8') # 1 for dn and jpeg car - parser.add_argument('--noise', type=int, default=15, help='noise level: 15, 25, 50') - parser.add_argument('--jpeg', type=int, default=40, help='scale factor: 10, 20, 30, 40') - parser.add_argument('--training_patch_size', type=int, default=128, help='patch size used in training SwinIR. ' - 'Just used to differentiate two different settings in Table 2 of the paper. 
' - 'Images are NOT tested patch by patch.') - parser.add_argument('--large_model', action='store_true', - help='use large model, only provided for real image sr') - parser.add_argument('--model_path', type=str, - default=self.model_zoo['real_sr'][4]) - parser.add_argument('--folder_lq', type=str, default=None, help='input low-quality test image folder') - parser.add_argument('--folder_gt', type=str, default=None, help='input ground-truth test image folder') - - self.args = parser.parse_args('') - - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - self.tasks = { - 'Real-World Image Super-Resolution': 'real_sr', - 'Grayscale Image Denoising': 'gray_dn', - 'Color Image Denoising': 'color_dn', - 'JPEG Compression Artifact Reduction': 'jpeg_car' - } - - @cog.input("image", type=Path, help="input image") - @cog.input("task_type", type=str, default='Real-World Image Super-Resolution', - options=['Real-World Image Super-Resolution', 'Grayscale Image Denoising', 'Color Image Denoising', - 'JPEG Compression Artifact Reduction'], - help="image restoration task type") - @cog.input("noise", type=int, default=15, options=[15, 25, 50], - help='noise level, activated for Grayscale Image Denoising and Color Image Denoising. ' - 'Leave it as default or arbitrary if other tasks are selected') - @cog.input("jpeg", type=int, default=40, options=[10, 20, 30, 40], - help='scale factor, activated for JPEG Compression Artifact Reduction. ' - 'Leave it as default or arbitrary if other tasks are selected') - def predict(self, image, task_type='Real-World Image Super-Resolution', jpeg=40, noise=15): - - self.args.task = self.tasks[task_type] - self.args.noise = noise - self.args.jpeg = jpeg - - # set model path - if self.args.task == 'real_sr': - self.args.scale = 4 - self.args.model_path = self.model_zoo[self.args.task][4] - elif self.args.task in ['gray_dn', 'color_dn']: - self.args.model_path = self.model_zoo[self.args.task][noise] - else: - self.args.model_path = self.model_zoo[self.args.task][jpeg] - - # set input folder - input_dir = 'input_cog_temp' - os.makedirs(input_dir, exist_ok=True) - input_path = os.path.join(input_dir, os.path.basename(image)) - shutil.copy(str(image), input_path) - if self.args.task == 'real_sr': - self.args.folder_lq = input_dir - else: - self.args.folder_gt = input_dir - - model = define_model(self.args) - model.eval() - model = model.to(self.device) - - # setup folder and path - folder, save_dir, border, window_size = setup(self.args) - os.makedirs(save_dir, exist_ok=True) - test_results = OrderedDict() - test_results['psnr'] = [] - test_results['ssim'] = [] - test_results['psnr_y'] = [] - test_results['ssim_y'] = [] - test_results['psnr_b'] = [] - # psnr, ssim, psnr_y, ssim_y, psnr_b = 0, 0, 0, 0, 0 - out_path = Path(tempfile.mkdtemp()) / "out.png" - - for idx, path in enumerate(sorted(glob.glob(os.path.join(folder, '*')))): - # read image - imgname, img_lq, img_gt = get_image_pair(self.args, path) # image to HWC-BGR, float32 - img_lq = np.transpose(img_lq if img_lq.shape[2] == 1 else img_lq[:, :, [2, 1, 0]], - (2, 0, 1)) # HCW-BGR to CHW-RGB - img_lq = torch.from_numpy(img_lq).float().unsqueeze(0).to(self.device) # CHW-RGB to NCHW-RGB - - # inference - with torch.no_grad(): - # pad input image to be a multiple of window_size - _, _, h_old, w_old = img_lq.size() - h_pad = (h_old // window_size + 1) * window_size - h_old - w_pad = (w_old // window_size + 1) * window_size - w_old - img_lq = torch.cat([img_lq, torch.flip(img_lq, [2])], 2)[:, :, :h_old + 
h_pad, :] - img_lq = torch.cat([img_lq, torch.flip(img_lq, [3])], 3)[:, :, :, :w_old + w_pad] - output = model(img_lq) - output = output[..., :h_old * self.args.scale, :w_old * self.args.scale] - - # save image - output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy() - if output.ndim == 3: - output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0)) # CHW-RGB to HCW-BGR - output = (output * 255.0).round().astype(np.uint8) # float32 to uint8 - cv2.imwrite(str(out_path), output) - - clean_folder(input_dir) - return out_path - - -def clean_folder(folder): - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - elif os.path.isdir(file_path): - shutil.rmtree(file_path) - except Exception as e: - print('Failed to delete %s. Reason: %s' % (file_path, e)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index 1fbe6ce9f8a91151f2dfb656e90c9586b6dd35e3..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fsaf/fsaf_r101_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fsaf/fsaf_r101_fpn_1x_coco.py deleted file mode 100644 index 95a7ae2de598f5c89ddf8f0f82be653aa85bd3e6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fsaf/fsaf_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fsaf_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/trttung1610/musicgen/audiocraft/utils/profiler.py b/spaces/trttung1610/musicgen/audiocraft/utils/profiler.py deleted file mode 100644 index b45b6d15910b50305c7b212c089ffad3c25b324d..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/utils/profiler.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import typing as tp - -import dora -import torch - - -logger = logging.getLogger(__name__) - - -class Profiler: - """Context manager wrapper for xformers profiler. 
- """ - def __init__(self, module: torch.nn.Module, enabled: bool = False): - self.profiler: tp.Optional[tp.Any] = None - if enabled: - from xformers.profiler import profile - output_dir = dora.get_xp().folder / 'profiler_data' - logger.info("Profiling activated, results with be saved to %s", output_dir) - self.profiler = profile(output_dir=output_dir, module=module) - - def step(self): - if self.profiler is not None: - self.profiler.step() # type: ignore - - def __enter__(self): - if self.profiler is not None: - return self.profiler.__enter__() # type: ignore - - def __exit__(self, exc_type, exc_value, exc_tb): - if self.profiler is not None: - return self.profiler.__exit__(exc_type, exc_value, exc_tb) # type: ignore diff --git a/spaces/user238921933/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py b/spaces/user238921933/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py deleted file mode 100644 index e8783bca153954afd086536a6dee854ec5e17ba9..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions-builtin/SwinIR/scripts/swinir_model.py +++ /dev/null @@ -1,178 +0,0 @@ -import contextlib -import os - -import numpy as np -import torch -from PIL import Image -from basicsr.utils.download_util import load_file_from_url -from tqdm import tqdm - -from modules import modelloader, devices, script_callbacks, shared -from modules.shared import cmd_opts, opts, state -from swinir_model_arch import SwinIR as net -from swinir_model_arch_v2 import Swin2SR as net2 -from modules.upscaler import Upscaler, UpscalerData - - -device_swinir = devices.get_device_for('swinir') - - -class UpscalerSwinIR(Upscaler): - def __init__(self, dirname): - self.name = "SwinIR" - self.model_url = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0" \ - "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \ - "-L_x4_GAN.pth " - self.model_name = "SwinIR 4x" - self.user_path = dirname - super().__init__() - scalers = [] - model_files = self.find_models(ext_filter=[".pt", ".pth"]) - for model in model_files: - if "http" in model: - name = self.model_name - else: - name = modelloader.friendly_name(model) - model_data = UpscalerData(name, model, self) - scalers.append(model_data) - self.scalers = scalers - - def do_upscale(self, img, model_file): - model = self.load_model(model_file) - if model is None: - return img - model = model.to(device_swinir, dtype=devices.dtype) - img = upscale(img, model) - try: - torch.cuda.empty_cache() - except: - pass - return img - - def load_model(self, path, scale=4): - if "http" in path: - dl_name = "%s%s" % (self.model_name.replace(" ", "_"), ".pth") - filename = load_file_from_url(url=path, model_dir=self.model_path, file_name=dl_name, progress=True) - else: - filename = path - if filename is None or not os.path.exists(filename): - return None - if filename.endswith(".v2.pth"): - model = net2( - upscale=scale, - in_chans=3, - img_size=64, - window_size=8, - img_range=1.0, - depths=[6, 6, 6, 6, 6, 6], - embed_dim=180, - num_heads=[6, 6, 6, 6, 6, 6], - mlp_ratio=2, - upsampler="nearest+conv", - resi_connection="1conv", - ) - params = None - else: - model = net( - upscale=scale, - in_chans=3, - img_size=64, - window_size=8, - img_range=1.0, - depths=[6, 6, 6, 6, 6, 6, 6, 6, 6], - embed_dim=240, - num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8], - mlp_ratio=2, - upsampler="nearest+conv", - resi_connection="3conv", - ) - params = "params_ema" - - pretrained_model = torch.load(filename) - if params is not None: - 
model.load_state_dict(pretrained_model[params], strict=True) - else: - model.load_state_dict(pretrained_model, strict=True) - return model - - -def upscale( - img, - model, - tile=None, - tile_overlap=None, - window_size=8, - scale=4, -): - tile = tile or opts.SWIN_tile - tile_overlap = tile_overlap or opts.SWIN_tile_overlap - - - img = np.array(img) - img = img[:, :, ::-1] - img = np.moveaxis(img, 2, 0) / 255 - img = torch.from_numpy(img).float() - img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype) - with torch.no_grad(), devices.autocast(): - _, _, h_old, w_old = img.size() - h_pad = (h_old // window_size + 1) * window_size - h_old - w_pad = (w_old // window_size + 1) * window_size - w_old - img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, : h_old + h_pad, :] - img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :, : w_old + w_pad] - output = inference(img, model, tile, tile_overlap, window_size, scale) - output = output[..., : h_old * scale, : w_old * scale] - output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy() - if output.ndim == 3: - output = np.transpose( - output[[2, 1, 0], :, :], (1, 2, 0) - ) # CHW-RGB to HCW-BGR - output = (output * 255.0).round().astype(np.uint8) # float32 to uint8 - return Image.fromarray(output, "RGB") - - -def inference(img, model, tile, tile_overlap, window_size, scale): - # test the image tile by tile - b, c, h, w = img.size() - tile = min(tile, h, w) - assert tile % window_size == 0, "tile size should be a multiple of window_size" - sf = scale - - stride = tile - tile_overlap - h_idx_list = list(range(0, h - tile, stride)) + [h - tile] - w_idx_list = list(range(0, w - tile, stride)) + [w - tile] - E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img) - W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir) - - with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar: - for h_idx in h_idx_list: - if state.interrupted or state.skipped: - break - - for w_idx in w_idx_list: - if state.interrupted or state.skipped: - break - - in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile] - out_patch = model(in_patch) - out_patch_mask = torch.ones_like(out_patch) - - E[ - ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf - ].add_(out_patch) - W[ - ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf - ].add_(out_patch_mask) - pbar.update(1) - output = E.div_(W) - - return output - - -def on_ui_settings(): - import gradio as gr - - shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling"))) - shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. 
Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling"))) - - -script_callbacks.on_ui_settings(on_ui_settings) diff --git a/spaces/user238921933/stable-diffusion-webui/javascript/extraNetworks.js b/spaces/user238921933/stable-diffusion-webui/javascript/extraNetworks.js deleted file mode 100644 index b15758b91ebd271604ac8ad342a19537b2efc760..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/javascript/extraNetworks.js +++ /dev/null @@ -1,107 +0,0 @@ - -function setupExtraNetworksForTab(tabname){ - gradioApp().querySelector('#'+tabname+'_extra_tabs').classList.add('extra-networks') - - var tabs = gradioApp().querySelector('#'+tabname+'_extra_tabs > div') - var search = gradioApp().querySelector('#'+tabname+'_extra_search textarea') - var refresh = gradioApp().getElementById(tabname+'_extra_refresh') - var close = gradioApp().getElementById(tabname+'_extra_close') - - search.classList.add('search') - tabs.appendChild(search) - tabs.appendChild(refresh) - tabs.appendChild(close) - - search.addEventListener("input", function(evt){ - searchTerm = search.value.toLowerCase() - - gradioApp().querySelectorAll('#'+tabname+'_extra_tabs div.card').forEach(function(elem){ - text = elem.querySelector('.name').textContent.toLowerCase() + " " + elem.querySelector('.search_term').textContent.toLowerCase() - elem.style.display = text.indexOf(searchTerm) == -1 ? "none" : "" - }) - }); -} - -var activePromptTextarea = {}; - -function setupExtraNetworks(){ - setupExtraNetworksForTab('txt2img') - setupExtraNetworksForTab('img2img') - - function registerPrompt(tabname, id){ - var textarea = gradioApp().querySelector("#" + id + " > label > textarea"); - - if (! activePromptTextarea[tabname]){ - activePromptTextarea[tabname] = textarea - } - - textarea.addEventListener("focus", function(){ - activePromptTextarea[tabname] = textarea; - }); - } - - registerPrompt('txt2img', 'txt2img_prompt') - registerPrompt('txt2img', 'txt2img_neg_prompt') - registerPrompt('img2img', 'img2img_prompt') - registerPrompt('img2img', 'img2img_neg_prompt') -} - -onUiLoaded(setupExtraNetworks) - -var re_extranet = /<([^:]+:[^:]+):[\d\.]+>/; -var re_extranet_g = /\s+<([^:]+:[^:]+):[\d\.]+>/g; - -function tryToRemoveExtraNetworkFromPrompt(textarea, text){ - var m = text.match(re_extranet) - if(! m) return false - - var partToSearch = m[1] - var replaced = false - var newTextareaText = textarea.value.replaceAll(re_extranet_g, function(found, index){ - m = found.match(re_extranet); - if(m[1] == partToSearch){ - replaced = true; - return "" - } - return found; - }) - - if(replaced){ - textarea.value = newTextareaText - return true; - } - - return false -} - -function cardClicked(tabname, textToAdd, allowNegativePrompt){ - var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea") - - if(! 
tryToRemoveExtraNetworkFromPrompt(textarea, textToAdd)){ - textarea.value = textarea.value + " " + textToAdd - } - - updateInput(textarea) -} - -function saveCardPreview(event, tabname, filename){ - var textarea = gradioApp().querySelector("#" + tabname + '_preview_filename > label > textarea') - var button = gradioApp().getElementById(tabname + '_save_preview') - - textarea.value = filename - updateInput(textarea) - - button.click() - - event.stopPropagation() - event.preventDefault() -} - -function extraNetworksSearchButton(tabs_id, event){ - searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > div > textarea') - button = event.target - text = button.classList.contains("search-all") ? "" : button.textContent.trim() - - searchTextarea.value = text - updateInput(searchTextarea) -} \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/test/basic_features/img2img_test.py b/spaces/user238921933/stable-diffusion-webui/test/basic_features/img2img_test.py deleted file mode 100644 index 08c5c903e8382ef4b969b01da87bc69fb06ff2b4..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/test/basic_features/img2img_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import unittest -import requests -from gradio.processing_utils import encode_pil_to_base64 -from PIL import Image - - -class TestImg2ImgWorking(unittest.TestCase): - def setUp(self): - self.url_img2img = "http://localhost:7860/sdapi/v1/img2img" - self.simple_img2img = { - "init_images": [encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))], - "resize_mode": 0, - "denoising_strength": 0.75, - "mask": None, - "mask_blur": 4, - "inpainting_fill": 0, - "inpaint_full_res": False, - "inpaint_full_res_padding": 0, - "inpainting_mask_invert": False, - "prompt": "example prompt", - "styles": [], - "seed": -1, - "subseed": -1, - "subseed_strength": 0, - "seed_resize_from_h": -1, - "seed_resize_from_w": -1, - "batch_size": 1, - "n_iter": 1, - "steps": 3, - "cfg_scale": 7, - "width": 64, - "height": 64, - "restore_faces": False, - "tiling": False, - "negative_prompt": "", - "eta": 0, - "s_churn": 0, - "s_tmax": 0, - "s_tmin": 0, - "s_noise": 1, - "override_settings": {}, - "sampler_index": "Euler a", - "include_init_images": False - } - - def test_img2img_simple_performed(self): - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_inpainting_masked_performed(self): - self.simple_img2img["mask"] = encode_pil_to_base64(Image.open(r"test/test_files/mask_basic.png")) - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_inpainting_with_inverted_masked_performed(self): - self.simple_img2img["mask"] = encode_pil_to_base64(Image.open(r"test/test_files/mask_basic.png")) - self.simple_img2img["inpainting_mask_invert"] = True - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_img2img_sd_upscale_performed(self): - self.simple_img2img["script_name"] = "sd upscale" - self.simple_img2img["script_args"] = ["", 8, "Lanczos", 2.0] - - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/autosize.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/autosize.py deleted file mode 100644 index 
ef3364454022edc85ae9b856aeaf8aaf4bc187b2..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/autosize.py +++ /dev/null @@ -1,94 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from copy import deepcopy -from typing import Tuple - -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - - -class ResizeLongestSide: - """ - Resizes images to the longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape(original_size[0], original_size[1], self.target_length) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[2], image.shape[3], self.target_length) - return F.interpolate(image, target_size, mode='bilinear', align_corners=False, antialias=True) - - def apply_coords_torch(self, coords: torch.Tensor, original_size: Tuple[int, ...]) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape(original_size[0], original_size[1], self.target_length) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch(self, boxes: torch.Tensor, original_size: Tuple[int, ...]) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. 
- """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. - """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/victorisgeek/SwapFace2Pon/face_parsing/__init__.py b/spaces/victorisgeek/SwapFace2Pon/face_parsing/__init__.py deleted file mode 100644 index 6497208d246c99110b0e75d01bc05ea7afc1415f..0000000000000000000000000000000000000000 --- a/spaces/victorisgeek/SwapFace2Pon/face_parsing/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .swap import init_parser, swap_regions, mask_regions, mask_regions_to_list -from .model import BiSeNet -from .parse_mask import init_parsing_model, get_parsed_mask, SoftErosion \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/README.md b/spaces/vonbarnekowa/stable-diffusion/README.md deleted file mode 100644 index 8f5a097ba8d46865b0ed6ca050e971f8a8ea7828..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stable Diffusion 2 -emoji: 🔥 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: stabilityai/stable-diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net.py b/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/design_filenames.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/design_filenames.py deleted file mode 100644 index 6c3d8e803bab6e7576057ce784edfcf0ee80f5c5..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/design_filenames.py +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/19 11:50 -@Author : alexanderwu -@File : design_filenames.py -""" -from metagpt.actions import Action -from metagpt.logs import logger - -PROMPT = """You are an AI developer, trying to write a program that generates code for users based on their intentions. -When given their intentions, provide a complete and exhaustive list of file paths needed to write the program for the user. -Only list the file paths you will write and return them as a Python string list. -Do not add any other explanations, just return a Python string list.""" - - -class DesignFilenames(Action): - def __init__(self, name, context=None, llm=None): - super().__init__(name, context, llm) - self.desc = "Based on the PRD, consider system design, and carry out the basic design of the corresponding " \ - "APIs, data structures, and database tables. Please give your design, feedback clearly and in detail." 
- - async def run(self, prd): - prompt = f"The following is the Product Requirement Document (PRD):\n\n{prd}\n\n{PROMPT}" - design_filenames = await self._aask(prompt) - logger.debug(prompt) - logger.debug(design_filenames) - return design_filenames diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/summarize.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/summarize.py deleted file mode 100644 index c3deef569010cace27c017fbb8a32f415dd76a0a..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/summarize.py +++ /dev/null @@ -1,93 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/19 23:07 -@Author : alexanderwu -@File : summarize.py -""" - - -# 出自插件:ChatGPT - 网站和 YouTube 视频摘要 -# https://chrome.google.com/webstore/detail/chatgpt-%C2%BB-summarize-every/cbgecfllfhmmnknmamkejadjmnmpfjmp?hl=zh-CN&utm_source=chrome-ntp-launcher -SUMMARIZE_PROMPT = """ -Your output should use the following template: -### Summary -### Facts -- [Emoji] Bulletpoint - -Your task is to summarize the text I give you in up to seven concise bullet points and start with a short, high-quality -summary. Pick a suitable emoji for every bullet point. Your response should be in {{SELECTED_LANGUAGE}}. If the provided - URL is functional and not a YouTube video, use the text from the {{URL}}. However, if the URL is not functional or is -a YouTube video, use the following text: {{CONTENT}}. -""" - - -# GCP-VertexAI-文本摘要(SUMMARIZE_PROMPT_2-5都是) -# https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/examples/prompt-design/text_summarization.ipynb -# 长文档需要map-reduce过程,见下面这个notebook -# https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/examples/document-summarization/summarization_large_documents.ipynb -SUMMARIZE_PROMPT_2 = """ -Provide a very short summary, no more than three sentences, for the following article: - -Our quantum computers work by manipulating qubits in an orchestrated fashion that we call quantum algorithms. -The challenge is that qubits are so sensitive that even stray light can cause calculation errors — and the problem worsens as quantum computers grow. -This has significant consequences, since the best quantum algorithms that we know for running useful applications require the error rates of our qubits to be far lower than we have today. -To bridge this gap, we will need quantum error correction. -Quantum error correction protects information by encoding it across multiple physical qubits to form a “logical qubit,” and is believed to be the only way to produce a large-scale quantum computer with error rates low enough for useful calculations. -Instead of computing on the individual qubits themselves, we will then compute on logical qubits. By encoding larger numbers of physical qubits on our quantum processor into one logical qubit, we hope to reduce the error rates to enable useful quantum algorithms. - -Summary: - -""" - - -SUMMARIZE_PROMPT_3 = """ -Provide a TL;DR for the following article: - -Our quantum computers work by manipulating qubits in an orchestrated fashion that we call quantum algorithms. -The challenge is that qubits are so sensitive that even stray light can cause calculation errors — and the problem worsens as quantum computers grow. -This has significant consequences, since the best quantum algorithms that we know for running useful applications require the error rates of our qubits to be far lower than we have today. -To bridge this gap, we will need quantum error correction. 
-Quantum error correction protects information by encoding it across multiple physical qubits to form a “logical qubit,” and is believed to be the only way to produce a large-scale quantum computer with error rates low enough for useful calculations. -Instead of computing on the individual qubits themselves, we will then compute on logical qubits. By encoding larger numbers of physical qubits on our quantum processor into one logical qubit, we hope to reduce the error rates to enable useful quantum algorithms. - -TL;DR: -""" - - -SUMMARIZE_PROMPT_4 = """ -Provide a very short summary in four bullet points for the following article: - -Our quantum computers work by manipulating qubits in an orchestrated fashion that we call quantum algorithms. -The challenge is that qubits are so sensitive that even stray light can cause calculation errors — and the problem worsens as quantum computers grow. -This has significant consequences, since the best quantum algorithms that we know for running useful applications require the error rates of our qubits to be far lower than we have today. -To bridge this gap, we will need quantum error correction. -Quantum error correction protects information by encoding it across multiple physical qubits to form a “logical qubit,” and is believed to be the only way to produce a large-scale quantum computer with error rates low enough for useful calculations. -Instead of computing on the individual qubits themselves, we will then compute on logical qubits. By encoding larger numbers of physical qubits on our quantum processor into one logical qubit, we hope to reduce the error rates to enable useful quantum algorithms. - -Bulletpoints: - -""" - - -SUMMARIZE_PROMPT_5 = """ -Please generate a summary of the following conversation and at the end summarize the to-do's for the support Agent: - -Customer: Hi, I'm Larry, and I received the wrong item. - -Support Agent: Hi, Larry. How would you like to see this resolved? - -Customer: That's alright. I want to return the item and get a refund, please. - -Support Agent: Of course. I can process the refund for you now. Can I have your order number, please? - -Customer: It's [ORDER NUMBER]. - -Support Agent: Thank you. I've processed the refund, and you will receive your money back within 14 days. - -Customer: Thank you very much. - -Support Agent: You're welcome, Larry. Have a good day! 
- -Summary: -""" diff --git a/spaces/wffcyrus/MetaGPT-v1/setup.py b/spaces/wffcyrus/MetaGPT-v1/setup.py deleted file mode 100644 index a88f9de92b3794144a0fee383206ef7de77f0554..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/setup.py +++ /dev/null @@ -1,54 +0,0 @@ -"""wutils: handy tools -""" -import subprocess -from codecs import open -from os import path - -from setuptools import Command, find_packages, setup - - -class InstallMermaidCLI(Command): - """A custom command to run `npm install -g @mermaid-js/mermaid-cli` via a subprocess.""" - - description = "install mermaid-cli" - user_options = [] - - def run(self): - try: - subprocess.check_call(["npm", "install", "-g", "@mermaid-js/mermaid-cli"]) - except subprocess.CalledProcessError as e: - print(f"Error occurred: {e.output}") - - -here = path.abspath(path.dirname(__file__)) - -with open(path.join(here, "README.md"), encoding="utf-8") as f: - long_description = f.read() - -with open(path.join(here, "requirements.txt"), encoding="utf-8") as f: - requirements = [line.strip() for line in f if line] - -setup( - name="metagpt", - version="0.1", - description="The Multi-Role Meta Programming Framework", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://gitlab.deepwisdomai.com/pub/metagpt", - author="Alexander Wu", - author_email="alexanderwu@fuzhi.ai", - license="Apache 2.0", - keywords="metagpt multi-role multi-agent programming gpt llm", - packages=find_packages(exclude=["contrib", "docs", "examples"]), - python_requires=">=3.9", - install_requires=requirements, - extras_require={ - "playwright": ["playwright>=1.26", "beautifulsoup4"], - "selenium": ["selenium>4", "webdriver_manager", "beautifulsoup4"], - "search-google": ["google-api-python-client==2.94.0"], - "search-ddg": ["duckduckgo-search==3.8.5"], - }, - cmdclass={ - "install_mermaid": InstallMermaidCLI, - }, -) diff --git a/spaces/xdecoder/Demo/tasks/open_inst.py b/spaces/xdecoder/Demo/tasks/open_inst.py deleted file mode 100644 index 1cf1686a0b20c8f54aca9a308afef7cf6dfed166..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/tasks/open_inst.py +++ /dev/null @@ -1,60 +0,0 @@ -# -------------------------------------------------------- -# X-Decoder -- Generalized Decoding for Pixel, Image, and Language -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Xueyan Zou (xueyan@cs.wisc.edu) -# -------------------------------------------------------- - -import torch -import numpy as np -from PIL import Image -from torchvision import transforms -from utils.visualizer import Visualizer -from detectron2.utils.colormap import random_color -from detectron2.data import MetadataCatalog -from detectron2.structures import BitMasks - - -t = [] -t.append(transforms.Resize(512, interpolation=Image.BICUBIC)) -transform = transforms.Compose(t) -metadata = MetadataCatalog.get('ade20k_panoptic_train') - -def open_instseg(model, image, texts, inpainting_text, *args, **kwargs): - thing_classes = [x.strip() for x in texts.split(',')] - thing_colors = [random_color(rgb=True, maximum=255).astype(np.int32).tolist() for _ in range(len(thing_classes))] - thing_dataset_id_to_contiguous_id = {x:x for x in range(len(thing_classes))} - - MetadataCatalog.get("demo").set( - thing_colors=thing_colors, - thing_classes=thing_classes, - thing_dataset_id_to_contiguous_id=thing_dataset_id_to_contiguous_id, - ) - - with torch.no_grad(): - 
model.model.sem_seg_head.predictor.lang_encoder.get_text_embeddings(thing_classes + ["background"], is_eval=True) - - metadata = MetadataCatalog.get('demo') - model.model.metadata = metadata - model.model.sem_seg_head.num_classes = len(thing_classes) - - image_ori = transform(image) - width = image_ori.size[0] - height = image_ori.size[1] - image = np.asarray(image_ori) - images = torch.from_numpy(image.copy()).permute(2,0,1).cuda() - - batch_inputs = [{'image': images, 'height': height, 'width': width}] - outputs = model.forward(batch_inputs) - visual = Visualizer(image_ori, metadata=metadata) - - inst_seg = outputs[-1]['instances'] - inst_seg.pred_masks = inst_seg.pred_masks.cpu() - inst_seg.pred_boxes = BitMasks(inst_seg.pred_masks > 0).get_bounding_boxes() - demo = visual.draw_instance_predictions(inst_seg) # rgb Image - res = demo.get_image() - - - MetadataCatalog.remove('demo') - torch.cuda.empty_cache() - return Image.fromarray(res), '', None diff --git a/spaces/xdecoder/Instruct-X-Decoder/tasks/ref_cap.py b/spaces/xdecoder/Instruct-X-Decoder/tasks/ref_cap.py deleted file mode 100644 index 76cd1fd34a038db0fd7a8818ff7a7c764bfb040d..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/tasks/ref_cap.py +++ /dev/null @@ -1,68 +0,0 @@ -# -------------------------------------------------------- -# X-Decoder -- Generalized Decoding for Pixel, Image, and Language -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Xueyan Zou (xueyan@cs.wisc.edu) -# -------------------------------------------------------- - -import torch -import torch.nn.functional as F -import numpy as np -from PIL import Image -from torchvision import transforms -from utils.visualizer import Visualizer -from detectron2.data import MetadataCatalog - -t = [] -t.append(transforms.Resize(224, interpolation=Image.BICUBIC)) -transform_ret = transforms.Compose(t) -t = [] -t.append(transforms.Resize(512, interpolation=Image.BICUBIC)) -transform_grd = transforms.Compose(t) - -metedata = MetadataCatalog.get('coco_2017_train_panoptic') - -def referring_captioning(model, image, texts, inpainting_text, *args, **kwargs): - model_last, model_cap = model - with torch.no_grad(): - image_ori = image - image = transform_grd(image) - width = image.size[0] - height = image.size[1] - image = np.asarray(image) - image_ori_ = image - images = torch.from_numpy(image.copy()).permute(2,0,1).cuda() - texts_input = [[texts.strip() if texts.endswith('.') else (texts + '.')]] - - batch_inputs = [{'image': images, 'groundings': {'texts':texts_input}, 'height': height, 'width': width}] - outputs = model_last.model.evaluate_grounding(batch_inputs, None) - - grd_mask = (outputs[-1]['grounding_mask'] > 0).float() - grd_mask_ = (1 - F.interpolate(grd_mask[None,], (224, 224), mode='nearest')[0]).bool() - - color = [252/255, 91/255, 129/255] - visual = Visualizer(image_ori_, metadata=metedata) - demo = visual.draw_binary_mask(grd_mask.cpu().numpy()[0], color=color, text=texts) - res = demo.get_image() - - if (1 - grd_mask_.float()).sum() < 5: - torch.cuda.empty_cache() - return Image.fromarray(res), 'n/a', None - - grd_mask_ = grd_mask_ * 0 - image = transform_ret(image_ori) - image_ori = np.asarray(image_ori) - image = np.asarray(image) - images = torch.from_numpy(image.copy()).permute(2,0,1).cuda() - batch_inputs = [{'image': images, 'image_id': 0, 'captioning_mask': grd_mask_}] - - token_text = texts.replace('.','') if texts.endswith('.') else texts - token = 
model_cap.model.sem_seg_head.predictor.lang_encoder.tokenizer.encode(token_text) - token = torch.tensor(token)[None,:-1] - - outputs = model_cap.model.evaluate_captioning(batch_inputs, extra={'token': token}) - # outputs = model_cap.model.evaluate_captioning(batch_inputs, extra={}) - text = outputs[-1]['captioning_text'] - - torch.cuda.empty_cache() - return Image.fromarray(res), text, None \ No newline at end of file diff --git a/spaces/xiangdy/chatGPT/modules/pdf_func.py b/spaces/xiangdy/chatGPT/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. - bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = 
True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." 
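# --- Illustrative sketch (not part of the original module) ---
# The templates above leave a `{}` slot in READING_PROMPT / READING_PROMT_V2 for the
# key points listed in BASE_POINTS, while parse_pdf() builds the Document whose text is
# summarized chapter by chapter. The snippet below is only a hedged example of how they
# are meant to compose; `build_reading_prompt` is a hypothetical helper introduced here
# purely for illustration.
def build_reading_prompt(key_points: str = BASE_POINTS) -> str:
    # Fill the `{}` placeholder with the focus points the summarizer should follow.
    return READING_PROMPT.format(key_points)

# Example (assumes ./build/test.pdf exists, as in the __main__ block below):
#   doc = parse_pdf("./build/test.pdf", two_column=True)
#   system_prompt = build_reading_prompt()
#   # then send system_prompt together with doc.text to the chat model of your choice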
- - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/yangogo/bingo/src/components/ui/alert-dialog.tsx b/spaces/yangogo/bingo/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
          - <AlertDialogPrimitive.Portal className={cn(className)} {...props}> - <div className="fixed inset-0 z-50 flex items-end justify-center sm:items-center"> - {children} - </div> - </AlertDialogPrimitive.Portal>
          -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
          -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/submission/_internal/run.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/submission/_internal/run.py deleted file mode 100644 index 18f830d81ead15fece09382cc30654fb89d14d1b..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/submission/_internal/run.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Helper for launching run functions in computing clusters. - -During the submit process, this file is copied to the appropriate run dir. -When the job is launched in the cluster, this module is the first thing that -is run inside the docker container. 
-""" - -import os -import pickle -import sys - -# PYTHONPATH should have been set so that the run_dir/src is in it -import dnnlib - -def main(): - if not len(sys.argv) >= 4: - raise RuntimeError("This script needs three arguments: run_dir, task_name and host_name!") - - run_dir = str(sys.argv[1]) - task_name = str(sys.argv[2]) - host_name = str(sys.argv[3]) - - submit_config_path = os.path.join(run_dir, "submit_config.pkl") - - # SubmitConfig should have been pickled to the run dir - if not os.path.exists(submit_config_path): - raise RuntimeError("SubmitConfig pickle file does not exist!") - - submit_config: dnnlib.SubmitConfig = pickle.load(open(submit_config_path, "rb")) - dnnlib.submission.submit.set_user_name_override(submit_config.user_name) - - submit_config.task_name = task_name - submit_config.host_name = host_name - - dnnlib.submission.submit.run_wrapper(submit_config) - -if __name__ == "__main__": - main() diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/transform/ControlCoordTransform.ts b/spaces/yderre-aubay/midi-player-demo/src/common/transform/ControlCoordTransform.ts deleted file mode 100644 index 70e030548e1e74777d11793ba0551ce1f2c4d19c..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/transform/ControlCoordTransform.ts +++ /dev/null @@ -1,72 +0,0 @@ -import { ItemValue } from "../../main/components/ControlPane/LineGraph/LineGraph" -import { IPoint, IRect } from "../geometry" -import { ControlSelection } from "../selection/ControlSelection" - -export class ControlCoordTransform { - private _maxValue: number - private _height: number - private _lineWidth: number - private _pixelsPerTick: number - - constructor( - pixelsPerTick: number, - maxValue: number, - height: number, - lineWidth: number, - ) { - this._pixelsPerTick = pixelsPerTick - this._maxValue = maxValue - this._height = height - this._lineWidth = lineWidth - } - - get maxValue() { - return this._maxValue - } - - getX(tick: number) { - return tick * this._pixelsPerTick - } - - getTicks(pixels: number) { - return Math.floor(pixels / this._pixelsPerTick) - } - - getY(value: number) { - return ( - (1 - value / this._maxValue) * (this._height - this._lineWidth * 2) + - this._lineWidth - ) - } - - getValue(y: number) { - return Math.floor( - (1 - (y - this._lineWidth) / (this._height - this._lineWidth * 2)) * - this._maxValue, - ) - } - - toPosition(tick: number, value: number): IPoint { - return { - x: Math.round(this.getX(tick)), - y: Math.round(this.getY(value)), - } - } - - fromPosition(position: IPoint): ItemValue { - return { - tick: this.getTicks(position.x), - value: this.getValue(position.y), - } - } - - transformSelection(selection: ControlSelection): IRect { - const x = this.getX(selection.fromTick) - return { - x, - y: 0, - width: this.getX(selection.toTick) - x, - height: this._height, - } - } -} diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/actions/selection.ts b/spaces/yderre-aubay/midi-player-demo/src/main/actions/selection.ts deleted file mode 100644 index 8fb98868ca993505b223fd6c5ccd82adb3ca60c5..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/actions/selection.ts +++ /dev/null @@ -1,618 +0,0 @@ -import { min } from "lodash" -import cloneDeep from "lodash/cloneDeep" -import { intersects } from "../../common/geometry" -import { isNotNull, isNotUndefined } from "../../common/helpers/array" -import { - Selection, - clampSelection, - movedSelection, - regularizedSelection, -} from 
"../../common/selection/Selection" -import { NoteEvent, TrackEvent, isNoteEvent } from "../../common/track" -import { NotePoint, clampNotePoint } from "../../common/transform/NotePoint" -import { - PianoNotesClipboardData, - isPianoNotesClipboardData, -} from "../clipboard/clipboardTypes" -import clipboard from "../services/Clipboard" -import RootStore from "../stores/RootStore" -import { pushHistory } from "./history" -import { transposeNotes } from "./song" - -function eventsInSelection(events: TrackEvent[], selection: Selection) { - const selectionRect = { - x: selection.from.tick, - width: selection.to.tick - selection.from.tick, - y: selection.to.noteNumber, - height: selection.from.noteNumber - selection.to.noteNumber, - } - return events.filter(isNoteEvent).filter((b) => - intersects( - { - x: b.tick, - width: b.duration, - y: b.noteNumber - 1, // Subtract 1 since the pitch is the lower end of the rectangle - height: 1, - }, - selectionRect, - ), - ) -} - -export const resizeSelection = - ({ pianoRollStore }: RootStore) => - (start: NotePoint, end: NotePoint) => { - const selection = regularizedSelection( - start.tick, - start.noteNumber, - end.tick, - end.noteNumber, - ) - - // integer containing the original coordinates. - selection.from.noteNumber = Math.ceil(selection.from.noteNumber) - selection.to.noteNumber = Math.floor(selection.to.noteNumber) - - pianoRollStore.selection = clampSelection(selection) - } - -export const fixSelection = - ({ - pianoRollStore, - pianoRollStore: { selectedTrack, selection }, - }: RootStore) => - (clearRect: boolean = false) => { - if (selectedTrack === undefined || selection === null) { - return - } - - // 選択範囲を確定して選択範囲内のノートを選択状態にする - // Confirm the selection and select the notes in the selection state - pianoRollStore.selectedNoteIds = eventsInSelection( - selectedTrack.events, - selection, - ).map((e) => e.id) - - if (clearRect) { - pianoRollStore.selection = null - } - } - -export const transposeSelection = - (rootStore: RootStore) => (deltaPitch: number) => { - const { - pianoRollStore, - pianoRollStore: { selectedTrackId, selection, selectedNoteIds }, - } = rootStore - - pushHistory(rootStore)() - - if (selection !== null) { - const s = movedSelection(selection, 0, deltaPitch) - pianoRollStore.selection = s - } - - transposeNotes(rootStore)(deltaPitch, { - [selectedTrackId]: selectedNoteIds, - }) - } - -export const moveSelection = (rootStore: RootStore) => (point: NotePoint) => { - const { - pianoRollStore: { selectedTrack, selection, quantizer }, - } = rootStore - - if (selectedTrack === undefined || selection === null) { - return - } - - // ノートと選択範囲を移動 - // Move notes and selection - const quantized = clampNotePoint({ - tick: quantizer.round(point.tick), - noteNumber: Math.round(point.noteNumber), - }) - - const dt = quantized.tick - selection.from.tick - const dn = quantized.noteNumber - selection.from.noteNumber - - const to = { - tick: selection.to.tick + dt, - noteNumber: selection.to.noteNumber + dn, - } - - const clampedTo = clampNotePoint(to) - const limit = { - tick: to.tick - clampedTo.tick, - noteNumber: to.noteNumber - clampedTo.noteNumber, - } - - moveSelectionBy(rootStore)({ - tick: dt - limit.tick, - noteNumber: dn - limit.noteNumber, - }) -} - -export const moveSelectionBy = - ({ - pianoRollStore, - pianoRollStore: { selectedTrack, selection, selectedNoteIds }, - pushHistory, - }: RootStore) => - (delta: NotePoint) => { - if (delta.tick === 0 && delta.noteNumber === 0) { - return - } - - if (selectedTrack === undefined) { - 
return - } - - pushHistory() - - if (selection !== null) { - const s = movedSelection(selection, delta.tick, delta.noteNumber) - pianoRollStore.selection = s - } - - selectedTrack.updateEvents( - selectedNoteIds - .map((id) => { - const n = selectedTrack.getEventById(id) - if (n == undefined || !isNoteEvent(n)) { - return null - } - const pos = clampNotePoint({ - tick: n.tick + delta.tick, - noteNumber: n.noteNumber + delta.noteNumber, - }) - return { - id, - ...pos, - } - }) - .filter(isNotNull), - ) - } - -export const resizeSelectionLeft = (rootStore: RootStore) => (tick: number) => { - const { - pianoRollStore, - pianoRollStore: { selection, quantizer }, - } = rootStore - - if (selection === null) { - return - } - - // 選択範囲とノートを左方向に伸長・縮小する - // Level and reduce the selection and notes in the left direction - const fromTick = quantizer.round(tick) - const delta = fromTick - selection.from.tick - - // 変形していないときは終了 - // End when not deformed - if (delta === 0) { - return - } - - // 選択領域のサイズがゼロになるときは終了 - // End when the size of selection area becomes zero - if (selection.to.tick - fromTick <= 0 || fromTick < 0) { - return - } - - // 右端を固定して長さを変更 - // Fix the right end and change the length - const s = cloneDeep(selection) - s.from.tick = fromTick - pianoRollStore.selection = s - - resizeNotesInSelectionLeftBy(rootStore)(delta) -} - -export const resizeNotesInSelectionLeftBy = - ({ - pianoRollStore: { selectedNoteIds, selectedTrack }, - pushHistory, - }: RootStore) => - (deltaTick: number) => { - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - pushHistory() - - selectedTrack.updateEvents( - selectedNoteIds - .map((id) => { - const n = selectedTrack.getEventById(id) - if (n == undefined || !isNoteEvent(n)) { - return null - } - const duration = n.duration - deltaTick - const tick = n.tick + deltaTick - if (duration <= 0 || tick < 0) { - // 幅がゼロになる場合は変形しない - // Do not deform if the width is zero - return { id } - } - return { - id, - tick, - duration, - } - }) - .filter(isNotNull), - ) - } - -export const resizeSelectionRight = - (rootStore: RootStore) => (tick: number) => { - const { - pianoRollStore, - pianoRollStore: { selection, quantizer }, - } = rootStore - - if (selection === null) { - return - } - - // 選択範囲とノートを右方向に伸長・縮小する - // Return and reduce the selection and note in the right direction - const toTick = quantizer.round(tick) - const delta = toTick - selection.to.tick - - // 変形していないときは終了 - // End when not deformed - if (delta === 0) { - return - } - - // 選択領域のサイズがゼロになるときは終了 - // End when the size of selection area becomes zero - if (toTick - selection.from.tick <= 0) { - return - } - - // 右端を固定して長さを変更 - // Fix the right end and change the length - const s = cloneDeep(selection) - s.to.tick = toTick - pianoRollStore.selection = s - - resizeNotesInSelectionRightBy(rootStore)(delta) - } - -export const resizeNotesInSelectionRightBy = - ({ - pianoRollStore: { selectedTrack, selectedNoteIds }, - pushHistory, - }: RootStore) => - (deltaDuration: number) => { - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - pushHistory() - - selectedTrack.updateEvents( - selectedNoteIds - .map((id) => { - const n = selectedTrack.getEventById(id) - if (n == undefined || !isNoteEvent(n)) { - return null - } - const duration = n.duration + deltaDuration - if (duration <= 0) { - // 幅がゼロになる場合は変形しない - // Do not deform if the width is zero - return { id } - } - return { - id, - duration, - } - }) - .filter(isNotNull), - ) - } - -export 
const startSelection = - ({ - pianoRollStore, - controlStore, - player, - controlStore: { quantizer }, - }: RootStore) => - (point: NotePoint, keepSelectedNoteIds: boolean = false) => { - if (!player.isPlaying) { - player.position = quantizer.round(point.tick) - } - - controlStore.selectedEventIds = [] - - if (!keepSelectedNoteIds) { - // deselect the notes - pianoRollStore.selectedNoteIds = [] - } - - // 選択範囲の右上を pos にする - // Set the upper right corner of the selection to POS - pianoRollStore.selection = { - from: point, - to: point, - } - } - -export const resetSelection = - ({ pianoRollStore }: RootStore) => - () => { - pianoRollStore.selection = null - pianoRollStore.selectedNoteIds = [] - } - -export const cloneSelection = - ({ - pianoRollStore, - pianoRollStore: { selection, selectedNoteIds, selectedTrack }, - }: RootStore) => - () => { - if (selectedTrack === undefined || selection === null) { - return - } - - // 選択範囲内のノートをコピーした選択範囲を作成 - // Create a selection that copies notes within selection - const notes = selectedNoteIds - .map((id) => selectedTrack.getEventById(id)) - .filter(isNotUndefined) - .map((note) => ({ - ...note, // copy - })) - selectedTrack.addEvents(notes) - pianoRollStore.selectedNoteIds = notes.map((e) => e.id) - } - -export const copySelection = - ({ - pianoRollStore: { selection, selectedNoteIds, selectedTrack }, - }: RootStore) => - () => { - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - const selectedNotes = selectedNoteIds - .map((id) => selectedTrack.getEventById(id)) - .filter(isNotUndefined) - .filter(isNoteEvent) - - const startTick = - selection?.from.tick ?? min(selectedNotes.map((note) => note.tick))! - - // 選択されたノートをコピー - // Copy selected note - const notes = selectedNotes.map((note) => ({ - ...note, - tick: note.tick - startTick, // 選択範囲からの相対位置にする - })) - - const data: PianoNotesClipboardData = { - type: "piano_notes", - notes, - } - - clipboard.writeText(JSON.stringify(data)) - } - -export const deleteSelection = - ({ - pianoRollStore, - pianoRollStore: { selection, selectedNoteIds, selectedTrack }, - pushHistory, - }: RootStore) => - () => { - if ( - selectedTrack === undefined || - (selectedNoteIds.length === 0 && selection === null) - ) { - return - } - - pushHistory() - - // 選択範囲と選択されたノートを削除 - // Remove selected notes and selected notes - selectedTrack.removeEvents(selectedNoteIds) - pianoRollStore.selection = null - pianoRollStore.selectedNoteIds = [] - } - -export const pasteSelection = - ({ player, pianoRollStore: { selectedTrack }, pushHistory }: RootStore) => - () => { - if (selectedTrack === undefined) { - return - } - // 現在位置にコピーしたノートをペースト - // Paste notes copied to the current position - const text = clipboard.readText() - if (!text || text.length === 0) { - return - } - const obj = JSON.parse(text) - if (!isPianoNotesClipboardData(obj)) { - return - } - - pushHistory() - - const notes = obj.notes.map((note) => ({ - ...note, - tick: note.tick + player.position, - })) - selectedTrack.addEvents(notes) - } - -export const duplicateSelection = - ({ - pianoRollStore, - pianoRollStore: { selection, selectedNoteIds, selectedTrack }, - pushHistory, - }: RootStore) => - () => { - if ( - selectedTrack === undefined || - selection === null || - selectedNoteIds.length === 0 - ) { - return - } - - pushHistory() - - // move to the end of selection - let deltaTick = selection.to.tick - selection.from.tick - - const selectedNotes = selectedNoteIds - .map((id) => selectedTrack.getEventById(id)) - 
.filter(isNotUndefined) - .filter(isNoteEvent) - - if (deltaTick === 0) { - const left = Math.min(...selectedNotes.map((n) => n.tick)) - const right = Math.max(...selectedNotes.map((n) => n.tick + n.duration)) - deltaTick = right - left - } - - const notes = selectedNotes.map((note) => ({ - ...note, - tick: note.tick + deltaTick, - })) - - // select the created notes - const addedNotes = selectedTrack.addEvents(notes) - const s = cloneDeep(selection) - s.from.tick += deltaTick - s.to.tick += deltaTick - pianoRollStore.selection = s - pianoRollStore.selectedNoteIds = addedNotes.map((n) => n.id) - } - -export const addNoteToSelection = - ({ pianoRollStore }: RootStore) => - (noteId: number) => { - pianoRollStore.selectedNoteIds.push(noteId) - } - -export const removeNoteFromSelection = - ({ - pianoRollStore, - pianoRollStore: { selectedNoteIds, selectedTrack }, - }: RootStore) => - (noteId: number) => { - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - pianoRollStore.selectedNoteIds = selectedNoteIds.filter( - (id) => id !== noteId, - ) - } - -export const selectNote = (rootStore: RootStore) => (noteId: number) => { - const { - pianoRollStore, - pianoRollStore: { selectedTrack }, - controlStore, - } = rootStore - - if (selectedTrack === undefined) { - return - } - - controlStore.selectedEventIds = [] - pianoRollStore.selectedNoteIds = [noteId] -} - -const sortedNotes = (notes: NoteEvent[]): NoteEvent[] => - notes.filter(isNoteEvent).sort((a, b) => { - if (a.tick < b.tick) return -1 - if (a.tick > b.tick) return 1 - if (a.noteNumber < b.noteNumber) return -1 - if (a.noteNumber > b.noteNumber) return 1 - return 0 - }) - -const selectNeighborNote = (rootStore: RootStore) => (deltaIndex: number) => { - const { - pianoRollStore: { selectedTrack, selectedNoteIds }, - } = rootStore - - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - const allNotes = selectedTrack.events.filter(isNoteEvent) - const selectedNotes = sortedNotes( - selectedNoteIds - .map((id) => allNotes.find((n) => n.id === id)) - .filter(isNotUndefined), - ) - if (selectedNotes.length === 0) { - return - } - const firstNote = sortedNotes(selectedNotes)[0] - const notes = sortedNotes(allNotes) - const currentIndex = notes.findIndex((n) => n.id === firstNote.id) - const nextNote = notes[currentIndex + deltaIndex] - if (nextNote === undefined) { - return - } - - selectNote(rootStore)(nextNote.id) -} - -export const selectNextNote = (rootStore: RootStore) => () => - selectNeighborNote(rootStore)(1) - -export const selectPreviousNote = (rootStore: RootStore) => () => - selectNeighborNote(rootStore)(-1) - -export const quantizeSelectedNotes = (rootStore: RootStore) => () => { - const { - pianoRollStore: { - selectedTrack, - selectedNoteIds, - enabledQuantizer: quantizer, - }, - } = rootStore - - if (selectedTrack === undefined || selectedNoteIds.length === 0) { - return - } - - pushHistory(rootStore)() - - const notes = selectedNoteIds - .map((id) => selectedTrack.getEventById(id)) - .filter(isNotUndefined) - .filter(isNoteEvent) - .map((e) => ({ - ...e, - tick: quantizer.round(e.tick), - })) - - selectedTrack.updateEvents(notes) -} - -export const selectAllNotes = - ({ pianoRollStore, pianoRollStore: { selectedTrack } }: RootStore) => - () => { - if (selectedTrack) { - pianoRollStore.selectedNoteIds = selectedTrack.events - .filter(isNoteEvent) - .map((note) => note.id) - } - } diff --git a/spaces/yegeta1243/Image-Models-Test130/app.py 
b/spaces/yegeta1243/Image-Models-Test130/app.py deleted file mode 100644 index 1dd67273a523f2655a42381f309cc071a542120a..0000000000000000000000000000000000000000 --- a/spaces/yegeta1243/Image-Models-Test130/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "digiplay/HIJKLMix_v2", - "yufengzheng/my_dreambooth_dog_200_random_images", - "skk412/sdxl-rrsk2", - "julien-c/autotrain-dreambooth-marsupilami", - "FFusion/400GB-LoraXL", - "matgu23/zws2", - "Yntec/lamettaRemix", - "jasonxxr666/lora-trained-xl-colab-v3", - "ProomptEngineer/pe-old-school-cartoon-style", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # 
cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/data_util.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/data_util.py deleted file mode 100644 index 63b1bce8e089485182c962e830a163d6d0059da8..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/data/data_util.py +++ /dev/null @@ -1,305 +0,0 @@ -import cv2 -import numpy as np -import torch -from os import path as osp -from torch.nn import functional as F - -from basicsr.data.transforms import mod_crop -from basicsr.utils import img2tensor, scandir - - -def read_img_seq(path, require_mod_crop=False, scale=1): - """Read a sequence of images from a given folder path. - - Args: - path (list[str] | str): List of image paths or image folder path. - require_mod_crop (bool): Require mod crop for each image. - Default: False. - scale (int): Scale factor for mod_crop. Default: 1. - - Returns: - Tensor: size (t, c, h, w), RGB, [0, 1]. - """ - if isinstance(path, list): - img_paths = path - else: - img_paths = sorted(list(scandir(path, full_path=True))) - imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths] - if require_mod_crop: - imgs = [mod_crop(img, scale) for img in imgs] - imgs = img2tensor(imgs, bgr2rgb=True, float32=True) - imgs = torch.stack(imgs, dim=0) - return imgs - - -def generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'): - """Generate an index list for reading `num_frames` frames from a sequence - of images. - - Args: - crt_idx (int): Current center index. - max_frame_num (int): Max number of the sequence of images (from 1). - num_frames (int): Reading num_frames frames. - padding (str): Padding mode, one of - 'replicate' | 'reflection' | 'reflection_circle' | 'circle' - Examples: current_idx = 0, num_frames = 5 - The generated frame indices under different padding mode: - replicate: [0, 0, 0, 1, 2] - reflection: [2, 1, 0, 1, 2] - reflection_circle: [4, 3, 0, 1, 2] - circle: [3, 4, 0, 1, 2] - - Returns: - list[int]: A list of indices. - """ - assert num_frames % 2 == 1, 'num_frames should be an odd number.' - assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.' - - max_frame_num = max_frame_num - 1 # start from 0 - num_pad = num_frames // 2 - - indices = [] - for i in range(crt_idx - num_pad, crt_idx + num_pad + 1): - if i < 0: - if padding == 'replicate': - pad_idx = 0 - elif padding == 'reflection': - pad_idx = -i - elif padding == 'reflection_circle': - pad_idx = crt_idx + num_pad - i - else: - pad_idx = num_frames + i - elif i > max_frame_num: - if padding == 'replicate': - pad_idx = max_frame_num - elif padding == 'reflection': - pad_idx = max_frame_num * 2 - i - elif padding == 'reflection_circle': - pad_idx = (crt_idx - num_pad) - (i - max_frame_num) - else: - pad_idx = i - num_frames - else: - pad_idx = i - indices.append(pad_idx) - return indices - - -def paired_paths_from_lmdb(folders, keys): - """Generate paired paths from lmdb files. - - Contents of lmdb. 
Taking the `lq.lmdb` for example, the file structure is: - - lq.lmdb - ├── data.mdb - ├── lock.mdb - ├── meta_info.txt - - The data.mdb and lock.mdb are standard lmdb files and you can refer to - https://lmdb.readthedocs.io/en/release/ for more details. - - The meta_info.txt is a specified txt file to record the meta information - of our datasets. It will be automatically created when preparing - datasets by our provided dataset tools. - Each line in the txt file records - 1)image name (with extension), - 2)image shape, - 3)compression level, separated by a white space. - Example: `baboon.png (120,125,3) 1` - - We use the image name without extension as the lmdb key. - Note that we use the same key for the corresponding lq and gt images. - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - Note that this key is different from lmdb keys. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}') - input_folder, gt_folder = folders - input_key, gt_key = keys - - if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')): - raise ValueError(f'{input_key} folder and {gt_key} folder should both in lmdb ' - f'formats. But received {input_key}: {input_folder}; ' - f'{gt_key}: {gt_folder}') - # ensure that the two meta_info files are the same - with open(osp.join(input_folder, 'meta_info.txt')) as fin: - input_lmdb_keys = [line.split('.')[0] for line in fin] - with open(osp.join(gt_folder, 'meta_info.txt')) as fin: - gt_lmdb_keys = [line.split('.')[0] for line in fin] - if set(input_lmdb_keys) != set(gt_lmdb_keys): - raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.') - else: - paths = [] - for lmdb_key in sorted(input_lmdb_keys): - paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)])) - return paths - - -def paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl): - """Generate paired paths from an meta information file. - - Each line in the meta information file contains the image names and - image shape (usually for gt), separated by a white space. - - Example of an meta information file: - ``` - 0001_s001.png (480,480,3) - 0001_s002.png (480,480,3) - ``` - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - meta_info_file (str): Path to the meta information file. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. 
' f'But got {len(keys)}') - input_folder, gt_folder = folders - input_key, gt_key = keys - - with open(meta_info_file, 'r') as fin: - gt_names = [line.split(' ')[0] for line in fin] - - paths = [] - for gt_name in gt_names: - basename, ext = osp.splitext(osp.basename(gt_name)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - gt_path = osp.join(gt_folder, gt_name) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paired_paths_from_folder(folders, keys, filename_tmpl): - """Generate paired paths from folders. - - Args: - folders (list[str]): A list of folder path. The order of list should - be [input_folder, gt_folder]. - keys (list[str]): A list of keys identifying folders. The order should - be in consistent with folders, e.g., ['lq', 'gt']. - filename_tmpl (str): Template for each filename. Note that the - template excludes the file extension. Usually the filename_tmpl is - for files in the input folder. - - Returns: - list[str]: Returned path list. - """ - assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. ' - f'But got {len(folders)}') - assert len(keys) == 2, ('The len of keys should be 2 with [input_key, gt_key]. ' f'But got {len(keys)}') - input_folder, gt_folder = folders - input_key, gt_key = keys - - input_paths = list(scandir(input_folder)) - gt_paths = list(scandir(gt_folder)) - assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: ' - f'{len(input_paths)}, {len(gt_paths)}.') - paths = [] - for gt_path in gt_paths: - basename, ext = osp.splitext(osp.basename(gt_path)) - input_name = f'{filename_tmpl.format(basename)}{ext}' - input_path = osp.join(input_folder, input_name) - assert input_name in input_paths, (f'{input_name} is not in ' f'{input_key}_paths.') - gt_path = osp.join(gt_folder, gt_path) - paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)])) - return paths - - -def paths_from_folder(folder): - """Generate paths from folder. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - - paths = list(scandir(folder)) - paths = [osp.join(folder, path) for path in paths] - return paths - - -def paths_from_lmdb(folder): - """Generate paths from lmdb. - - Args: - folder (str): Folder path. - - Returns: - list[str]: Returned path list. - """ - if not folder.endswith('.lmdb'): - raise ValueError(f'Folder {folder}folder should in lmdb format.') - with open(osp.join(folder, 'meta_info.txt')) as fin: - paths = [line.split('.')[0] for line in fin] - return paths - - -def generate_gaussian_kernel(kernel_size=13, sigma=1.6): - """Generate Gaussian kernel used in `duf_downsample`. - - Args: - kernel_size (int): Kernel size. Default: 13. - sigma (float): Sigma of the Gaussian kernel. Default: 1.6. - - Returns: - np.array: The Gaussian kernel. - """ - from scipy.ndimage import filters as filters - kernel = np.zeros((kernel_size, kernel_size)) - # set element at the middle to one, a dirac delta - kernel[kernel_size // 2, kernel_size // 2] = 1 - # gaussian-smooth the dirac, resulting in a gaussian filter - return filters.gaussian_filter(kernel, sigma) - - -def duf_downsample(x, kernel_size=13, scale=4): - """Downsamping with Gaussian kernel used in the DUF official code. - - Args: - x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w). - kernel_size (int): Kernel size. Default: 13. 
- scale (int): Downsampling factor. Supported scale: (2, 3, 4). - Default: 4. - - Returns: - Tensor: DUF downsampled frames. - """ - assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.' - - squeeze_flag = False - if x.ndim == 4: - squeeze_flag = True - x = x.unsqueeze(0) - b, t, c, h, w = x.size() - x = x.view(-1, 1, h, w) - pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2 - x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect') - - gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale) - gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0) - x = F.conv2d(x, gaussian_filter, stride=scale) - x = x[:, :, 2:-2, 2:-2] - x = x.view(b, t, c, x.size(2), x.size(3)) - if squeeze_flag: - x = x.squeeze(0) - return x diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/convert_dalle_to_flava_codebook.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/convert_dalle_to_flava_codebook.py deleted file mode 100644 index 7b544125114c85fcf01a881f460ae70472148c85..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/convert_dalle_to_flava_codebook.py +++ /dev/null @@ -1,102 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import os - -import torch - -from transformers import FlavaImageCodebook, FlavaImageCodebookConfig - - -def rreplace(s, old, new, occurrence): - li = s.rsplit(old, occurrence) - return new.join(li) - - -def count_parameters(state_dict): - # encoder.embeddings are double copied in original FLAVA - return sum(param.float().sum() if "encoder.embeddings" not in key else 0 for key, param in state_dict.items()) - - -def upgrade_state_dict(state_dict): - upgrade = {} - - group_keys = ["group_1", "group_2", "group_3", "group_4"] - for key, value in state_dict.items(): - for group_key in group_keys: - if group_key in key: - key = key.replace(f"{group_key}.", f"{group_key}.group.") - - if "res_path" in key: - key = key.replace("res_path.", "res_path.path.") - - if key.endswith(".w"): - key = rreplace(key, ".w", ".weight", 1) - if key.endswith(".b"): - key = rreplace(key, ".b", ".bias", 1) - - upgrade[key] = value.float() - - return upgrade - - -@torch.no_grad() -def convert_dalle_checkpoint(checkpoint_path, pytorch_dump_folder_path, config_path=None, save_checkpoint=True): - """ - Copy/paste/tweak model's weights to transformers design. 
- """ - from dall_e import Encoder - - encoder = Encoder() - if os.path.exists(checkpoint_path): - ckpt = torch.load(checkpoint_path) - else: - ckpt = torch.hub.load_state_dict_from_url(checkpoint_path) - - if isinstance(ckpt, Encoder): - ckpt = ckpt.state_dict() - encoder.load_state_dict(ckpt) - - if config_path is not None: - config = FlavaImageCodebookConfig.from_pretrained(config_path) - else: - config = FlavaImageCodebookConfig() - - hf_model = FlavaImageCodebook(config).eval() - state_dict = encoder.state_dict() - - hf_state_dict = upgrade_state_dict(state_dict) - hf_model.load_state_dict(hf_state_dict) - hf_state_dict = hf_model.state_dict() - hf_count = count_parameters(hf_state_dict) - state_dict_count = count_parameters(state_dict) - - assert torch.allclose(hf_count, state_dict_count, atol=1e-3) - - if save_checkpoint: - hf_model.save_pretrained(pytorch_dump_folder_path) - else: - return hf_state_dict - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.") - parser.add_argument("--checkpoint_path", default=None, type=str, help="Path to flava checkpoint") - parser.add_argument("--config_path", default=None, type=str, help="Path to hf config.json of model to convert") - args = parser.parse_args() - - convert_dalle_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path) diff --git a/spaces/youngtsai/Mandarin-TTS/bert/__init__.py b/spaces/youngtsai/Mandarin-TTS/bert/__init__.py deleted file mode 100644 index d7dcbe2c051f01fff99c3bf38113db9fceacaf6b..0000000000000000000000000000000000000000 --- a/spaces/youngtsai/Mandarin-TTS/bert/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .ProsodyModel import TTSProsody \ No newline at end of file diff --git a/spaces/ysharma/open-interpreter/app.py b/spaces/ysharma/open-interpreter/app.py deleted file mode 100644 index e54dc71e21f16ad9ea64b3f5a28311c0265654fe..0000000000000000000000000000000000000000 --- a/spaces/ysharma/open-interpreter/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import interpreter -import gradio as gr -import os -import io -import contextlib - -# Use environ variables for saving and using OpenAI API key as below or use from a textbox in gradio UI -# interpreter.api_key = os.environ.get('openai_api_key') - -interpreter.auto_run = True - -def chat_with_interpreter(message, history, openai_api_key): - interpreter.api_key = openai_api_key - if message == 'reset': - interpreter.reset() - # Redirect stdout to capture the streamed output - new_stdout = io.StringIO() - with contextlib.redirect_stdout(new_stdout): - interpreter.chat(message) - output = new_stdout.getvalue() - - # Return this output so Gradio's ChatInterface can display it - return output - -openai_api_key = gr.Textbox(label='OpenAI API Key', intercative=True) -additional_inputs = [openai_api_key] -examples=[["what is 2+2?"], - ["Can you solve for x: 10x -65=0?"], - ["What are top 10 headlines from BBC from last week?"] - ], - -demo = gr.ChatInterface(fn=chat_with_interpreter, - title="Open-Interpreter Gradio ChatInterface", - description="Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally", - clear_btn=None, - retry_btn=None, - undo_btn=None, - #examples=examples, - additional_inputs=additional_inputs, - additional_inputs_accordion_name = "OpenAI API Key", - ).queue() -demo.launch(debug=True) diff --git a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Attention_SSN.py 
b/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Attention_SSN.py deleted file mode 100644 index 7470535bc2a9c6bc166114ec3fb029365a279a32..0000000000000000000000000000000000000000 --- a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/models/Attention_SSN.py +++ /dev/null @@ -1,218 +0,0 @@ -from abc import abstractmethod -from functools import partial -from typing import Iterable -import math - -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .SSN import Conv, Conv2DMod, Decoder, Up -from .attention import AttentionBlock -from .blocks import ResBlock, Res_Type, get_activation - - -class Attention_Encoder(nn.Module): - def __init__(self, in_channels=3, mid_act='gelu', dropout=0.0, num_heads=8, resnet=True): - super(Attention_Encoder, self).__init__() - - self.in_conv = Conv(in_channels, 32-in_channels, stride=1, activation=mid_act, resnet=resnet) - self.down_32_64 = Conv(32, 64, stride=2, activation=mid_act, resnet=resnet) - self.down_64_64_1 = Conv(64, 64, activation=mid_act, resnet=resnet) - - self.down_64_128 = Conv(64, 128, stride=2, activation=mid_act, resnet=resnet) - self.down_128_128_1 = Conv(128, 128, activation=mid_act, resnet=resnet) - - self.down_128_256 = Conv(128, 256, stride=2, activation=mid_act, resnet=resnet) - self.down_256_256_1 = Conv(256, 256, activation=mid_act, resnet=resnet) - self.down_256_256_1_attn = AttentionBlock(256, num_heads) - - self.down_256_512 = Conv(256, 512, stride=2, activation=mid_act, resnet=resnet) - self.down_512_512_1 = Conv(512, 512, activation=mid_act, resnet=resnet) - self.down_512_512_1_attn = AttentionBlock(512, num_heads) - - self.down_512_512_2 = Conv(512, 512, activation=mid_act, resnet=resnet) - self.down_512_512_2_attn = AttentionBlock(512, num_heads) - - self.down_512_512_3 = Conv(512, 512, activation=mid_act, resnet=resnet) - self.down_512_512_3_attn = AttentionBlock(512, num_heads) - - - def forward(self, x): - x1 = self.in_conv(x) # 32 x 256 x 256 - x1 = torch.cat((x, x1), dim=1) - - x2 = self.down_32_64(x1) - x3 = self.down_64_64_1(x2) - - x4 = self.down_64_128(x3) - x5 = self.down_128_128_1(x4) - - x6 = self.down_128_256(x5) - x7 = self.down_256_256_1(x6) - x7 = self.down_256_256_1_attn(x7) - - x8 = self.down_256_512(x7) - x9 = self.down_512_512_1(x8) - x9 = self.down_512_512_1_attn(x9) - - x10 = self.down_512_512_2(x9) - x10 = self.down_512_512_2_attn(x10) - - x11 = self.down_512_512_3(x10) - x11 = self.down_512_512_3_attn(x11) - - return x11, x10, x9, x8, x7, x6, x5, x4, x3, x2, x1 - - -class Attention_Decoder(nn.Module): - def __init__(self, out_channels=3, mid_act='gelu', out_act='sigmoid', resnet = True, num_heads=8): - - super(Attention_Decoder, self).__init__() - - input_channel = 512 - fea_dim = 100 - - self.to_style1 = nn.Linear(in_features=fea_dim, out_features=input_channel) - - self.up_16_16_1 = Conv(input_channel, 256, activation=mid_act, style=True, resnet=resnet) - self.up_16_16_1_attn = AttentionBlock(256, num_heads=num_heads) - - self.up_16_16_2 = Conv(768, 512, activation=mid_act, resnet=resnet) - self.up_16_16_2_attn = AttentionBlock(512, num_heads=num_heads) - - self.up_16_16_3 = Conv(1024, 512, activation=mid_act, resnet=resnet) - self.up_16_16_3_attn = AttentionBlock(512, num_heads=num_heads) - - self.up_16_32 = Up(1024, 256, activation=mid_act, resnet=resnet) - self.to_style2 = nn.Linear(in_features=fea_dim, out_features=512) - self.up_32_32_1 = Conv(512, 256, activation=mid_act, style=True, resnet=resnet) - 
self.up_32_32_1_attn = AttentionBlock(256, num_heads=num_heads) - - self.up_32_64 = Up(512, 128, activation=mid_act, resnet=resnet) - self.to_style3 = nn.Linear(in_features=fea_dim, out_features=256) - self.up_64_64_1 = Conv(256, 128, activation=mid_act, style=True, resnet=resnet) - - self.up_64_128 = Up(256, 64, activation=mid_act, resnet=resnet) - self.to_style4 = nn.Linear(in_features=fea_dim, out_features=128) - self.up_128_128_1 = Conv(128, 64, activation=mid_act, style=True, resnet=resnet) - - self.up_128_256 = Up(128, 32, activation=mid_act, resnet=resnet) - self.out_conv = Conv(64, out_channels, activation=out_act) - self.out_act = get_activation(out_act) - - - def forward(self, x, style): - x11, x10, x9, x8, x7, x6, x5, x4, x3, x2, x1 = x - - style1 = self.to_style1(style) - y = self.up_16_16_1(x11, style1) # 256 x 16 x 16 - y = self.up_16_16_1_attn(y) - - y = torch.cat((x10, y), dim=1) # 768 x 16 x 16 - y = self.up_16_16_2(y, y) # 512 x 16 x 16 - y = self.up_16_16_2_attn(y) - - - y = torch.cat((x9, y), dim=1) # 1024 x 16 x 16 - y = self.up_16_16_3(y, y) # 512 x 16 x 16 - y = self.up_16_16_3_attn(y) - - y = torch.cat((x8, y), dim=1) # 1024 x 16 x 16 - y = self.up_16_32(y, y) # 256 x 32 x 32 - - y = torch.cat((x7, y), dim=1) - style2 = self.to_style2(style) - y = self.up_32_32_1(y, style2) # 256 x 32 x 32 - y = self.up_32_32_1_attn(y) - - y = torch.cat((x6, y), dim=1) - y = self.up_32_64(y, y) - - y = torch.cat((x5, y), dim=1) - style3 = self.to_style3(style) - - y = self.up_64_64_1(y, style3) # 128 x 64 x 64 - - y = torch.cat((x4, y), dim=1) - y = self.up_64_128(y, y) - - y = torch.cat((x3, y), dim=1) - style4 = self.to_style4(style) - y = self.up_128_128_1(y, style4) # 64 x 128 x 128 - - y = torch.cat((x2, y), dim=1) - y = self.up_128_256(y, y) # 32 x 256 x 256 - - y = torch.cat((x1, y), dim=1) - y = self.out_conv(y, y) # 3 x 256 x 256 - y = self.out_act(y) - return y - - - -class Attention_SSN(nn.Module): - def __init__(self, in_channels, out_channels, num_heads=8, resnet=True, mid_act='gelu', out_act='gelu'): - super(Attention_SSN, self).__init__() - # use keyword arguments so num_heads is not silently consumed by the encoder's unused dropout parameter - self.encoder = Attention_Encoder(in_channels=in_channels, mid_act=mid_act, num_heads=num_heads, resnet=resnet) - self.decoder = Attention_Decoder(out_channels=out_channels, mid_act=mid_act, out_act=out_act, resnet=resnet, num_heads=num_heads) - - - def forward(self, x, softness): - latent = self.encoder(x) - pred = self.decoder(latent, softness) - - return pred - - -def get_model_size(model): - param_size = 0 - # import pdb; pdb.set_trace() - for param in model.parameters(): - param_size += param.nelement() * param.element_size() - - buffer_size = 0 - for buffer in model.buffers(): - buffer_size += buffer.nelement() * buffer.element_size() - - size_all_mb = (param_size + buffer_size) / 1024 ** 2 - print('model size: {:.3f}MB'.format(size_all_mb)) - # return param_size + buffer_size - return size_all_mb - - -if __name__ == '__main__': - model = AttentionBlock(in_channels=256, num_heads=8) - x = torch.randn(5, 256, 64, 64) - - y = model(x) - print('{}, {}'.format(x.shape, y.shape)) - - # ------------------------------------------------------------------ # - in_channels = 3 - out_channels = 1 - num_heads = 8 - resnet = True - mid_act = 'gelu' - out_act = 'gelu' - - model = Attention_SSN(in_channels=in_channels, - out_channels=out_channels, - num_heads=num_heads, - resnet=resnet, - mid_act=mid_act, - out_act=out_act) - - x = torch.randn(5, 3, 256, 256) - softness = torch.randn(5, 100) - - - y = model(x, softness) - - - print('x: {}, y: {}'.format(x.shape, y.shape)) - - get_model_size(model) - #
------------------------------------------------------------------ # diff --git a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.} - -% Just a standard paragraph with citations, rewrite. 
-%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. A recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$ for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state prevents recurrent models from processing multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time we are given both input and output, and this linear nature does not allow the RNN to process all inputs and outputs simultaneously, so such models have not been trained on datasets of the scale of the web. What's the largest dataset we have? Talk about Nvidia's and possibly others' efforts to speed things up, and possibly other efforts that alleviate this but are still limited by its computational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs? Then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet, facenet (also talk about quasi-RNNs here). Now we talk about attention!! Along with cell architectures such as long short-term memory (LSTM) \citep{hochreiter1997} and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.
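For readers skimming this deleted draft of the introduction: the sequential bottleneck it describes reduces to a one-line recurrence. The fragment below is an editorial sketch and is not part of the original introduction.tex; $x_t$ is introduced here only as a name for the input at position $t$.

```latex
% Editorial sketch (not from the original paper source): the recurrence
% behind the sequential constraint discussed above.
\begin{equation*}
  h_t = f\bigl(h_{t-1},\, x_t\bigr), \qquad t = 1, \dots, T .
\end{equation*}
% Because h_T depends on h_{T-1}, which depends on h_{T-2}, and so on,
% the T steps must be computed serially within each training example;
% a self-attention layer instead relates every pair of positions in one
% parallel step, which is the property the Transformer builds on.
```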
\ No newline at end of file diff --git a/spaces/yuzu34/rvc-hololive/infer_pack/transforms.py b/spaces/yuzu34/rvc-hololive/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/yuzu34/rvc-hololive/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = 
unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/zjrwtx/xiaoyi_drawing/app.py b/spaces/zjrwtx/xiaoyi_drawing/app.py deleted file mode 100644 index 
3eed2b1a032041f60dde229b3b36516dcc758133..0000000000000000000000000000000000000000 --- a/spaces/zjrwtx/xiaoyi_drawing/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import streamlit as st -import os -import openai - -# openai.api_key = os.environ.get('openai.api_key') -# openai.api_key ="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" -#页面设置 -st.set_page_config( - page_title="小译学长|AI绘画(输入文本描述可生成对应图)", - page_icon=":robot:" -) -st.header("🔥AI绘画(输入文本描述可生成对应图)") - -#检查账号登陆 -def get_text1(): - if 'openai_key' not in st.session_state: - input_text1 = st.text_input("📫请输入你的账号: ", key="input") - if st.button("确认登陆!", key="input3"): - st.session_state['openai_key'] = input_text1 - return input_text1 - else: - return st.session_state['openai_key'] - -openai_key = get_text1() -if openai_key: - openai.api_key = openai_key - st.write("") -else: - # st.write("⚒️账号获取方式:扫描下方二维码或搜索关注微信公众号【正经人王同学】回复【小译学长】获取你的账号,然后将公共账号输入到此处,再两次点击【确认登陆!】就好") - st.image('https://pic4.zhimg.com/v2-401dd67cf027f85f53e4be3bd28dab5f_b.jpg') -st.markdown("""⚒️账号获取方式:[账号购买](https://xiaoyisenior.top/586e5309-d0a6-4e4c-bdc0-ae0ed450f55e) [正经人王同学|公众号](https://mp.weixin.qq.com/s?__biz=Mzg3ODcwNzk3Nw==&mid=2247485615&idx=1&sn=c691d496386b5972e36fea8eaee33b97&chksm=cf0edcc9f87955df95fa23a716d78d496e7456183da7cfa6b348db6988865da9060896abcdcd&token=1164458978&lang=zh_CN#rd) [抖音](https://www.douyin.com/user/MS4wLjABAAAAIdY0VlMSK0Shyd4FxHBgkXAtH4Zq8wsuKzIuSICWpy0) [小红书 ](https://www.xiaohongshu.com/user/profile/5f12a46a000000000101ff27)""") -prompt = st.text_input("📝告诉小译学长你想画的图是什么样的吧:") - -def image(prompt): - try: - images = openai.Image.create( - prompt=prompt, - n=4, - size="1024x1024" - ) - st.empty() - for image in images["data"]: - st.image(image["url"],width=300) - return - except Exception as e: - st.write("❌❌❌你的账号填写有误,请刷新页面重新填写正确的账号!") -if st.button("开始绘画"): - - - image(prompt) - - - - -st.write(""" -### 文本生成图技巧: - -👀描述词格式:主体(描述的是什么)+环境(在什么样的环境下)+风格(图片的风格是什么:二次元、古风、钢笔画等等) - -✏️描述词例子:上海外滩,白色背景,线稿,钢笔画,速写,4K,未来主义 -""") \ No newline at end of file diff --git a/spaces/zjxchina/vits_seki/transforms.py b/spaces/zjxchina/vits_seki/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/zjxchina/vits_seki/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def 
unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = 
derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet
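The same piecewise rational-quadratic spline module appears twice among the deleted files above (infer_pack/transforms.py and transforms.py). For anyone salvaging it, here is a minimal round-trip sketch; it assumes one of those files has been saved locally as transforms.py, and the batch size, bin count, and tail bound below are illustrative values, not taken from either repo.

```python
# Minimal sketch: exercise the deleted spline utilities and check that the
# inverse pass undoes the forward pass (the transform is a bijection).
import torch

from transforms import piecewise_rational_quadratic_transform  # the module deleted above

torch.manual_seed(0)
batch, num_bins, tail_bound = 8, 10, 5.0

inputs = torch.empty(batch).uniform_(-tail_bound, tail_bound)
# Unnormalized spline parameters. With tails="linear" the derivatives tensor
# carries num_bins - 1 values; the two boundary derivatives are padded inside
# unconstrained_rational_quadratic_spline.
unnormalized_widths = torch.randn(batch, num_bins)
unnormalized_heights = torch.randn(batch, num_bins)
unnormalized_derivatives = torch.randn(batch, num_bins - 1)

outputs, logabsdet = piecewise_rational_quadratic_transform(
    inputs, unnormalized_widths, unnormalized_heights, unnormalized_derivatives,
    inverse=False, tails="linear", tail_bound=tail_bound,
)
recovered, inv_logabsdet = piecewise_rational_quadratic_transform(
    outputs, unnormalized_widths, unnormalized_heights, unnormalized_derivatives,
    inverse=True, tails="linear", tail_bound=tail_bound,
)

# The round trip should recover the inputs, and the log-determinants should cancel.
assert torch.allclose(recovered, inputs, atol=1e-4)
assert torch.allclose(logabsdet + inv_logabsdet, torch.zeros_like(logabsdet), atol=1e-4)
```

This mirrors how VITS-style flows use these functions: a conditioning network predicts the unnormalized widths, heights, and derivatives, and the spline then acts element-wise as an invertible transform with a tractable log-determinant.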
