diff --git a/spaces/12Venusssss/text_generator/README.md b/spaces/12Venusssss/text_generator/README.md
deleted file mode 100644
index c79d42bc5d8b9194f0760d4e24600647a6478ecb..0000000000000000000000000000000000000000
--- a/spaces/12Venusssss/text_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generator
-emoji: 📚
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8 12 Cung G [HOT].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8 12 Cung G [HOT].md
deleted file mode 100644
index 28f742031e384cf2660c467bfebda917f6cda620..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8 12 Cung G [HOT].md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-<br />

What zodiac sign is December 8? Learn about people born on December 8 among the 12 zodiac signs

-

Do you know which zodiac sign people born on December 8 belong to? Would you like to learn about their personality, interests, love life, career, and destiny? If so, join us in exploring this article to find out which sign December 8 falls under and discover some interesting facts about people born on this day.

-

What zodiac sign is December 8


DOWNLOAD ————— https://byltly.com/2uKwBF



-

Which of the 12 zodiac signs does December 8 belong to?

-

According to the method of determining zodiac signs by date of birth, people born on December 8 belong to Sagittarius. Sagittarius is symbolized by an archer drawing a bow. It is one of the three Fire signs, alongside Leo and Aries. Sagittarius is ruled by Jupiter, which makes its natives energetic, idealistic, and adventurous.

-

The personality of people born on December 8

-

People born on December 8 have a very distinctive personality that sets them apart from others. They are generous, optimistic, and have a great sense of humor. They are broadly knowledgeable and love freedom; their philosophy of life is to be quick-witted, resourceful, and to enjoy themselves. They are not afraid of hardship and are always ready for new challenges. They dare to dream and dare to pursue their dreams.

-

However, people born on December 8 also have their weaknesses. They may promise more than they can deliver, are very impatient, and will say whatever comes to mind no matter how tactless it is. They can be overconfident and arrogant, refusing to listen to the advice of others. They can also be too carefree and irresponsible in some situations.

-

The interests of people born on December 8

-

People born on December 8 have very diverse and rich interests. They enjoy outdoor activities, traveling, and exploring new places and different cultures

-

ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avunu Valliddaru Istapaddaru Full Movie Download Experience the Magic of Telugu Cinema with this Film.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avunu Valliddaru Istapaddaru Full Movie Download Experience the Magic of Telugu Cinema with this Film.md
deleted file mode 100644
index 1dea58a13b12c16640f97a4662aca9cf72d84273..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avunu Valliddaru Istapaddaru Full Movie Download Experience the Magic of Telugu Cinema with this Film.md
+++ /dev/null
@@ -1,168 +0,0 @@
-<br />
-

Avunu Valliddaru Istapaddaru Full Movie Download: A Romantic Comedy by Vamsy

-

If you are looking for a fun-filled movie to watch with your family or friends, you might want to check out Avunu Valliddaru Istapaddaru, a Telugu romantic comedy film written and directed by Vamsy. This movie was released in 2002 and won many awards including the Andhra Pradesh State Nandi Award. It features Ravi Teja, Kalyani, Prasanna, Krishna Bhagavan, Sankaramanchi Parthasarathi, Mallikarjuna Rao, Jeeva, etc. in prominent roles. The film's music is composed by Chakri.

-

In this article, we will give you a brief overview of the movie, its plot, its characters, its music, its direction, and its availability for download. We will also answer some frequently asked questions about the movie at the end. So, let's get started!

-

Avunu Valliddaru Istapaddaru Full Movie Download


DOWNLOADhttps://byltly.com/2uKzMi



-

The Plot of Avunu Valliddaru Istapaddaru

-

The plot of Avunu Valliddaru Istapaddaru revolves around two parallel stories that eventually converge into one hilarious climax. The stories are:

-

The Story of Anil and Swathi

-

Anil (Ravi Teja) is a well-educated but unemployed youth who comes to Hyderabad in search of a job. After attending a hundred interviews, he is offered a night watchman job by an employer who is impressed by Anil's honesty and dignity of labour. Anil starts looking for a room to stay in a nearby colony.

-

Satyanandam (Jeeva) stays in the colony and takes care of a house of his friend living in America, collecting rent for him. Swathi (Kalyani) stays in that house and works in a software firm. Satyanandam is interested in collecting a second rent for himself and offers the house to Anil on the condition that he can stay there only during the day, without Swathi's knowledge. Anil agrees and moves into the house.

-

Anil is impressed by the way the room is decorated and artfully arranged and understands that the lady staying there has very good taste. The colony is full of comical characters very typical of Vamsy's films. Satyanandam is a miser trying to make money by whatever means he can. His crazy brother-in-law (Krishna Bhagavan) keeps creating trouble for him and others in the colony. The washer-man (Mallikarjuna Rao) sells his crazy ideas to people. Potti Raju (Kondavalasa Lakshmana Rao) keeps making several attempts to start his own business but always ends up in a loss. The interaction of these characters with each other and the humorous situations that arise form the backbone of the movie's storyline.

-

After a month, Anil accidentally breaks Swathi's porcelain artefact in the room and writes a letter to her apologising for his mistake. Swathi comes across the letter, and learns that someone else has been staying in her room without her knowledge. But she takes a liking for Anil's honesty and lets him stay in her house when she is not there. Both keep communicating through letters and become good friends, gradually falling in love without seeing each other.

-

Anil and Swathi also happen to meet in a restaurant when she accidentally accuses him of stealing her purse. They start off as enemies but become good friends without knowing that they are roommates too. One day, Anil discovers that the friend he has been meeting outside is none other than his own roommate Swathi, but does not reveal this to her, wanting to surprise her at the time of marriage.

-

The Story of Anand and Madhavi

-

Anand (Prasanna) is the brother of Swathi's office manager (Banerjee) who takes a liking for her after seeing her photo on his brother's desk. He sends his father (Kota Srinivasa Rao) to her adopted parents (Surya & Saroja) in their village seeking alliance.

-


-

Madhavi (Kalyani) is Swathi's look-alike who lives in their village with her grandmother (Sri Lakshmi). She is an innocent girl who loves nature and animals. She gets mistaken for Swathi by Anand's father who thinks she is his son's bride-to-be.

-

Madhavi falls in love with Anil after seeing his photo on Swathi's letter. She decides to elope with him without knowing his name or address. She reaches Hyderabad with her grandmother's help and finds out that he lives in Satyanandam's house.

-

Madhavi tries to marry Anil by pretending to be Swathi while Swathi gets kidnapped by Anand who wants to marry her by force. A series of confusions ensue as Anil tries to escape from Madhavi while looking for Swathi while Anand tries to convince Swathi while avoiding Madhavi.

-

The climax reveals that Madhavi is actually Swathi's twin sister who was separated at birth due to an accident. They reunite with their parents after clearing all misunderstandings. Anil marries Swathi while Anand marries Madhavi.

-

The Characters of Avunu Valliddaru Istapaddaru

-

The Music of Avunu Valliddaru Istapaddaru

-

The music of Avunu Valliddaru Istapaddaru is composed by Chakri, who is known for his catchy tunes and melodious songs. The movie has 11 songs in total, including the background score. The songs are sung by popular singers like S.P. Balasubrahmanyam, Kousalya, Sandeep, Ravi Varma, etc. The lyrics are written by Sai Sriharsha, who has also written dialogues for the movie.

-

The songs of the movie are of different genres and themes, ranging from romantic to folk to comedy. The songs are well-received by the audience and have become evergreen hits. Some of the songs are:

| Song | Singer(s) | Lyricist | Genre/Theme |
| --- | --- | --- | --- |
| Venello Hai Hai | Chakri | Sai Sriharsha | Romantic/Title song |
| Raa Rammani | S.P. Balasubrahmanyam & Kousalya | Sai Sriharsha | Romantic/Duet |
| O Neshtma | S.P. Balasubrahmanyam & Kousalya | Sai Sriharsha | Romantic/Sad song |
| Nalo Nenu Lene | Sandeep & Kousalya | Sai Sriharsha | Romantic/Letter song |
| Pema Gelupu | Kousalya | Sai Sriharsha | Folk/Village song |
| Sithakoka Chiluka | Chakri & Kousalya | Sai Sriharsha | Comedy/Item song |
| Yemi Ee Bhagyamo | Kousalya | Sai Sriharsha | Romantic/Solo song |
| Ennenno Varnalu | S.P. Balasubrahmanyam & Kousalya | Sai Sriharsha | Romantic/Duet song |
| Pogadamaku Athiga | S.P. Balasubrahmanyam & Kousalya | Sai Sriharsha | Folk/Wedding song |
| Madhi Ninduga | S.P. Balasubrahmanyam & Kousalya | Sai Sriharsha | Romantic/Lullaby song |
| Nuziveedu Sonia | Ravi Varma | Sai Sriharsha | Comedy/Parody song |
-

The Direction of Avunu Valliddaru Istapaddaru

-

The direction of Avunu Valliddaru Istapaddaru is done by Vamsy, who is one of the most acclaimed and versatile filmmakers in Telugu cinema. He is known for his unique style and vision as a writer and director. He has made many cult classics and award-winning movies in different genres such as comedy, thriller, drama, romance, etc.

-

Vamsy was inspired by the story of Gooduru Viswanatha Sastry, a renowned Telugu writer and poet. He adapted his story "Nenu Naa Rakshasi" for the movie and added his own touch of humor, romance, drama, and suspense. He also wrote the screenplay and dialogues for the movie.

-

Vamsy used his trademark elements such as colorful costumes, natural locations, quirky characters, witty dialogues, and poetic narration in the movie. He also used some innovative techniques such as split-screen, freeze-frame, voice-over, etc. to enhance the storytelling. He blended comedy and romance with a touch of mystery and suspense in the movie.

-

Vamsy also extracted the best performances from his actors and technicians. He made Ravi Teja and Kalyani shine in their dual roles as Anil-Swathi and Anand-Madhavi. He also brought out the comic talent of Prasanna, Krishna Bhagavan, Jeeva, Mallikarjuna Rao, etc. He also collaborated with Chakri for the music and K. Rajendra Prasad for the cinematography of the movie.

-

Avunu Valliddaru Istapaddaru Full Movie Download

-

If you are interested in watching Avunu Valliddaru Istapaddaru full movie, you might be wondering where and how to download it. Well, there are several options available for you to download the movie legally and safely. Here are some of them:

- Voot: Voot is a popular streaming platform that offers a wide range of movies, shows, originals, and live TV channels. You can watch Avunu Valliddaru Istapaddaru full movie online on Voot for free with ads. You can also download the movie on your device and watch it offline. You just need to register on Voot and enjoy the movie.
- Disney+ Hotstar: Disney+ Hotstar is another leading streaming service that provides a variety of content across genres and languages. You can watch Avunu Valliddaru Istapaddaru full movie online on Disney+ Hotstar with a subscription. You can also download the movie on your device and watch it offline. You need to have a Disney+ Hotstar VIP or Premium subscription to access the movie.
- YouTube: YouTube is the world's largest video-sharing platform that hosts millions of videos on various topics. You can watch Avunu Valliddaru Istapaddaru full movie online on YouTube for free with ads. You can also download the movie on your device and watch it offline. You just need to search for the movie title on YouTube and find a reliable channel that has uploaded the movie.

These are some of the legal and safe ways to download Avunu Valliddaru Istapaddaru full movie. However, we advise you to avoid any illegal or pirated websites that claim to offer the movie for free or at a low cost. These websites may harm your device or expose your personal data to hackers. Moreover, downloading or watching movies from such websites is a violation of the copyright law and may result in legal action.

-

Conclusion

-

Avunu Valliddaru Istapaddaru is a Telugu romantic comedy film that is written and directed by Vamsy. It is based on the story of Gooduru Viswanatha Sastry and stars Ravi Teja, Kalyani, Prasanna, etc. in the lead roles. The movie has a unique plot that revolves around two parallel stories of love and confusion. The movie also has a great cast of characters who add to the comedy and drama of the movie. The movie also has a melodious music composed by Chakri and a colorful cinematography by K. Rajendra Prasad.

-

Avunu Valliddaru Istapaddaru is a movie that will make you laugh, cry, and fall in love. It is a movie that will entertain you and touch your heart. It is a movie that you should not miss. If you want to watch or download Avunu Valliddaru Istapaddaru full movie, you can use any of the legal and safe platforms mentioned above. We hope you enjoy watching this movie and have a great time.

-

FAQs

-

Here are some frequently asked questions about Avunu Valliddaru Istapaddaru full movie:

- Q: Who is the director of Avunu Valliddaru Istapaddaru?
- A: Vamsy is the director of Avunu Valliddaru Istapaddaru. He is also the writer and dialogue writer of the movie.
- Q: Who are the main actors of Avunu Valliddaru Istapaddaru?
- A: Ravi Teja, Kalyani, and Prasanna are the main actors of Avunu Valliddaru Istapaddaru. They play the roles of Anil, Swathi/Madhavi, and Anand respectively.
- Q: What is the genre of Avunu Valliddaru Istapaddaru?
- A: Avunu Valliddaru Istapaddaru is a romantic comedy film. It has elements of humor, romance, drama, and suspense.
- Q: What is the story of Avunu Valliddaru Istapaddaru?
- A: Avunu Valliddaru Istapaddaru is based on the story of Gooduru Viswanatha Sastry. It revolves around two parallel stories of love and confusion. Anil and Swathi are roommates who fall in love through letters without seeing each other. Anand and Madhavi are look-alikes who get involved in a mistaken identity situation. The movie shows how these four characters sort out their problems and find their true partners.
- Q: Where can I watch or download Avunu Valliddaru Istapaddaru full movie?
- A: You can watch or download Avunu Valliddaru Istapaddaru full movie on any of the legal and safe platforms mentioned above, such as Voot, Disney+ Hotstar, or YouTube.

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bmw Scanner 140 Full Version Unlock Version with Software Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bmw Scanner 140 Full Version Unlock Version with Software Download.md
deleted file mode 100644
index fd94edeb52d94272bf03c6a11c63a6e107fc325c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bmw Scanner 140 Full Version Unlock Version with Software Download.md
+++ /dev/null
@@ -1,133 +0,0 @@
-

Download T-RackS 3 Deluxe Full Crack 11

-

If you want your mixes and mastered files to sound their best, you need the processors in T-RackS 3 Deluxe. Simple as that. But what is T-RackS 3 Deluxe and why do you need it? What are the features and benefits of this amazing software? And how can you download T-RackS 3 Deluxe full crack 11 safely and legally? In this article, we will answer all these questions and more. Read on to find out everything you need to know about T-RackS 3 Deluxe.

-

download t racks 3 deluxe full crack 11


DOWNLOADhttps://byltly.com/2uKvwU



-

What is T-RackS 3 Deluxe and why you need it

-

T-RackS 3 Deluxe is a suite of professional mixing and mastering tools that can help you achieve stunning sonic results. It consists of 9 processors based on vintage and modern analog and digital gear, such as compressors, equalizers, limiters, clippers, and more. You can use these processors individually or in combination to shape, enhance, polish, and finalize your audio tracks.

-

T-RackS 3 Deluxe was developed by IK Multimedia, a leading company in the field of music technology. IK Multimedia has been creating innovative products for musicians, producers, engineers, and hobbyists since 1996. Some of their most popular products include AmpliTube, SampleTank, Miroslav Philharmonik, iRig, iLoud, and of course, T-RackS.

-

T-RackS was one of the first software-based mastering solutions on the market. It was launched in 1999 as a standalone application that emulated a classic analog mastering rack. Since then, it has evolved into a powerful plug-in suite that can be used both as a standalone application or as a plug-in within your favorite DAW (Digital Audio Workstation).

-


-

The latest version of T-RackS is T-RackS 5, which was released in 2017. It features 38 processors, including four new ones: Master Match, Dyna-Mu, ONE, and EQual. It also features an improved interface, a resizable window, a comprehensive metering section, an album assembly module, and more.

-

However, if you are looking for a more affordable option that still offers great quality and versatility, you might want to check out T-RackS 3 Deluxe. This version was released in 2008 and includes 9 processors, which are more than enough to cover most of your mixing and mastering needs. Plus, you can get it for a fraction of the price of T-RackS 5.

-

So why do you need T-RackS 3 Deluxe? Because it can make your recordings sound warm, full, rich, spacious, clear, punchy, loud, balanced, professional, and ready for distribution. Whether you are working on rock, pop, hip-hop, jazz, classical, or any other genre of music, you can use T-RackS 3 Deluxe to bring out the best in your tracks.

-

What are the features and benefits of T-RackS 3 Deluxe

-

T-RackS 3 Deluxe comes with 9 processors, each one modeled after some of the most iconic analog or digital devices ever created. These processors are:

- -

As you can see, T-RackS 3 Deluxe offers a wide range of features and benefits that can improve your mixing and mastering workflow and results. You can use these processors in three ways:

-
    -
1. As individual plug-ins: You can insert any of these processors on individual tracks or buses in your DAW and tweak them as you like.
2. As a plug-in suite: You can load up to 12 processors in a single plug-in instance and create custom chains and presets. You can also reorder, bypass, solo, or mute any processor with a simple drag-and-drop.
3. As a standalone application: You can launch T-RackS 3 Deluxe as a standalone application and use it as a complete mastering station. You can load multiple audio files, edit them, process them, compare them, export them, and more.
-

No matter how you use T-RackS 3 Deluxe, you will get the same high-quality sound and performance. T-RackS 3 Deluxe supports 64-bit Audio Units, VST2, VST3, AAX formats and is compatible with most DAWs and operating systems.

-

How to download T-RackS 3 Deluxe full crack 11 safely and legally

-

Now that you know what T-RackS 3 Deluxe is and what it can do for you, you might be wondering how to download it for free. After all, who doesn't like free stuff? However, before you start searching for T-RackS 3 Deluxe full crack 11 on shady websites or torrent sites, you should be aware of the risks and consequences of doing so.

-

Downloading illegal or pirated software is not only unethical but also dangerous. You could end up with:

- -

Do you really want to risk all that for a free download? We don't think so. That's why we recommend you to download T-RackS 3 Deluxe full crack 11 safely and legally from the official website of IK Multimedia. Here's how:

-
    -
1. Go to https://www.ikmultimedia.com/products/tr5deluxe/: This is the product page of T-RackS 5 Deluxe, which includes T-RackS 3 Deluxe as well as four new processors.
2. Click on "Buy Now": This will take you to the online store where you can choose your preferred payment method and currency.
3. Enter the coupon code "TR5DELUXE11": This will apply a special discount of 80% on the regular price of $199.99 USD. You will only pay $39.99 USD for T-RackS 5 Deluxe!
4. Complete the checkout process: This will require you to create an account or log in with an existing one, enter your billing information, review your order details, and confirm your purchase.
5. Download T-RackS 5 Deluxe: After completing the purchase, you will receive an email with a download link and an authorization code. You can also access these from your IK Multimedia user area.
6. Install and activate T-RackS 5 Deluxe: Follow the instructions in the email or on the website to install it on your computer. Then launch the Authorization Manager application and enter your authorization code to activate it.
-

Congratulations! You have successfully downloaded T-RackS 3 Deluxe full crack 11 safely and legally from IK Multimedia's website. You can now enjoy all the features and benefits of this amazing software without any worries or regrets.

-

Conclusion

-

In conclusion, T-RackS 3 Deluxe gives you everything you need to make your recordings sound warm, full, rich, and detailed. You can also benefit from the flexible and user-friendly interface, the comprehensive and accurate metering section, and the standalone and plug-in modes. T-RackS 3 Deluxe is a must-have tool for any serious musician, producer, or engineer who wants to take their sound to the next level.

-


However, you don't have to pay a fortune to get T-RackS 3 Deluxe. You can download it for free from the official website of IK Multimedia by using a special coupon code that gives you an 80% discount. This way, you can save money and avoid the risks and consequences of downloading illegal or pirated software.

-

So what are you waiting for? Download T-RackS 3 Deluxe full crack 11 today and start mixing and mastering like a pro. You won't regret it!

-

FAQs

-

Here are some common questions that you might have about T-RackS 3 Deluxe full crack 11:

-
    -
1. What is the difference between T-RackS 3 Deluxe and T-RackS 5 Deluxe? T-RackS 5 Deluxe is the latest version of T-RackS that includes four new processors: Master Match, Dyna-Mu, ONE, and EQual. It also has an improved interface, a resizable window, a comprehensive metering section, an album assembly module, and more. However, T-RackS 3 Deluxe still has all the essential processors that you need for mixing and mastering, and it costs much less than T-RackS 5 Deluxe.
2. Can I use T-RackS 3 Deluxe with any DAW? Yes, you can use T-RackS 3 Deluxe with any DAW that supports 64-bit Audio Units, VST2, VST3, or AAX formats. You can also use it as a standalone application for mastering multiple audio files.
3. How many processors can I use at the same time in T-RackS 3 Deluxe? You can use up to 12 processors in a single plug-in instance or standalone application. You can create custom chains and presets by dragging and dropping the processors in any order.
4. How can I get more processors for T-RackS 3 Deluxe? You can buy more processors from IK Multimedia's online store or from authorized dealers. You can also upgrade to T-RackS 5 MAX v2, which includes all the processors available for T-RackS.
5. How can I get technical support for T-RackS 3 Deluxe? You can contact IK Multimedia's technical support team via email, phone, or online forum. You can also check their FAQ page or user manual for more information.
-

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Der Herr Der Ringe Die Schlacht Um Mittelerde 2 German Pc Iso.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Der Herr Der Ringe Die Schlacht Um Mittelerde 2 German Pc Iso.md
deleted file mode 100644
index 904300823920b984b8bb7f4206d8ca13bb103551..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Der Herr Der Ringe Die Schlacht Um Mittelerde 2 German Pc Iso.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-

Der Herr Der Ringe Die Schlacht Um Mittelerde 2 German Pc Iso: A Review

-

If you are a fan of J.R.R. Tolkien's epic fantasy saga, The Lord of the Rings, you might be interested in playing a video game that lets you experience the war in Middle-earth. One such game is Der Herr Der Ringe Die Schlacht Um Mittelerde 2, or The Lord of the Rings: The Battle for Middle-Earth II, a real-time strategy game developed by EA Los Angeles and published by Electronic Arts in 2006. In this article, we will review this game and tell you everything you need to know about it.

-

Der Herr Der Ringe Die Schlacht Um Mittelerde 2 German Pc Iso


DOWNLOAD 🗹 https://byltly.com/2uKwK5



-

Introduction

-

What is Der Herr Der Ringe Die Schlacht Um Mittelerde 2?

-

Der Herr Der Ringe Die Schlacht Um Mittelerde 2 is a sequel to The Lord of the Rings: The Battle for Middle-Earth, which was released in 2004. The game covers the events of The Lord of the Rings trilogy, as well as some elements from The Hobbit and The Silmarillion. Unlike the first game, which focused on the War of the Ring in southern Middle-earth, this game features a new campaign that explores the War in the North, where elves, dwarves, men, and goblins fight for control over lands such as Rivendell, Erebor, Mirkwood, and Angmar.

-

What are the features of the game?

-

The game has several features that make it an enjoyable and immersive experience for fans of The Lord of the Rings. Some of these features are:

- -

What are the requirements to play the game?

-

To play this game on your PC, you will need:

-
- - - - - - - - -
| Component | Minimum requirement |
| --- | --- |
| Operating System | Windows XP or later |
| Processor | 1.6 GHz or faster |
| Memory | 512 MB RAM or more |
| Graphics | 64 MB video card with DirectX 9.0c compatible drivers |
| Sound | DirectX 9.0c compatible sound card |
| Disk Space | 6 GB or more |
| DVD-ROM Drive | Required for installation |
| Internet Connection | Required for online play |
-

Gameplay

-

How to install the game?

-

To install this game on your PC:

-
    -
1. If you have a physical copy of the game on a DVD-ROM disc, insert it into your DVD-ROM drive. If you have a digital copy of the game as an ISO file, mount it using an imaging program such as Daemon Tools Lite. If you have neither option available, you can download an ISO file from archive.org.
2. Run setup.exe from your DVD-ROM drive or mounted ISO file. Follow the on-screen instructions to install the game using the serial from 'serial.txt'. Do not start it yet.
3. Patch your game to version 1.09v2 by executing 'LotrBfMe-65539-german.exe'. This patch will fix some bugs and improve compatibility with newer systems.
4. Create a folder named 'Meine Die Schlacht um Mittelerde™ II-Dateien' with 'Options.ini' as its content. Copy this folder to 'C:\\Users\\%Username%\\AppData\\Roaming\\'. This will prevent some errors when launching or playing the game.
5. Run 'BFME2 Patch Switcher' and click on '1.09' with 'v2 Zoomfaktor' & 'FIX MY RESOLUTION'. This will adjust your resolution settings according to your monitor size.
6. Eject your DVD-ROM disc or unmount your ISO file. You do not need them anymore to play.
7. Start your game by running 'game.dat' from your installation directory.
-

How to start the game?

-

To start playing this game:

-

-
    -
  1. Select your language from English (default), German (Deutsch), French (Français), Italian (Italiano), Spanish (Español), Dutch (Nederlands), Norwegian (Norsk), Polish (Polski), or Swedish (Svenska).
  2. -
  3. Select your profile name or create a new one.
  4. -
  5. Select your preferred mode from Single Player or Multiplayer.
  6. -
  7. Select your preferred option from Campaigns (Good or Evil), Skirmish (online or offline), War of the Ring (online or offline), Custom Scenarios (online or offline), Options (graphics, sound, controls, etc.), Credits (view who made this game), or Quit Game (exit).
  8. -
-

How to choose a faction

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (torrent lumion 3.0pro x86) - Experience the power of Lumion the best 3D rendering software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (torrent lumion 3.0pro x86) - Experience the power of Lumion the best 3D rendering software.md
deleted file mode 100644
index 4a0c0eb846d6e2bc2cd538d7fd51f80a2474d1d7..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (torrent lumion 3.0pro x86) - Experience the power of Lumion the best 3D rendering software.md
+++ /dev/null
@@ -1,96 +0,0 @@
-

HD Online Player (torrent lumion 3.0pro x86): A Review

-

If you are an architect, designer, or 3D enthusiast, you might have heard of Lumion, a powerful 3D rendering software that can turn your CAD models into stunning videos or images in seconds. But what if you don't have the latest version of Lumion or a powerful computer to run it? In this article, we will review HD Online Player (torrent lumion 3.0pro x86), a torrent file that allows you to download and install Lumion 3.0 Pro on your Windows PC with a 32-bit processor. We will also show you how to use Lumion 3.0 Pro to create amazing renders with realistic environments and effects.

-

What is Lumion and why do you need it?

-

Lumion is a 3D rendering software that can visualize your CAD models in a video or image with real-life environments and striking artistic flair. It is designed for architects, designers, and anyone who wants to showcase their 3D projects in a fast and easy way. Lumion can import your model from Revit, 3ds Max, SketchUp, AutoCAD, Rhino, ArchiCAD, among many other modeling programs, and instantly breathe life into your designs with realistic landscapes and urban context, stylish effects, and thousands of objects and materials from the content library.

-

HD Online Player (torrent lumion 3.0pro x86)


Download File > https://byltly.com/2uKxke



-

Lumion can help you communicate your design vision to your clients, colleagues, or audience in a more engaging and convincing way. You can also use Lumion to explore different design options, test different scenarios, and refine your ideas before finalizing them. Lumion can save you time, money, and effort by making 3D rendering a fun and easy process.

-

Lumion features and benefits

-

Lumion has many features and benefits that make it one of the best 3D rendering software in the market. Here are some of them:

-

Volumetric spotlights and omni lights

-

One of the new features in Lumion 12 Pro is the volumetric effect for spotlights and omni lights. This feature allows you to create beams of light that illuminate the dust particles in the air, creating a dramatic and atmospheric effect. You can use this feature to highlight specific areas of your design, such as entrances, windows, or sculptures. You can also adjust the color, intensity, angle, and size of the light beams to suit your needs.

-

Surface decals and realistic materials

-

Another new feature in Lumion 12 Pro is the surface decals option. This option allows you to add stickers, logos, signs, graffiti, or any other image to any surface in your scene. You can use this feature to add details, branding, or personality to your design. You can also adjust the transparency, scale, rotation, and position of the decals to fit them perfectly on the surface.

-

Improved scene-building and workflow

-

Lumion 12 also offers a vastly improved scene-building experience and subtle yet powerful usability and workflow improvements that make Lumion much more intuitive and significantly faster. You can access the vast content library with a simple click and drag action, and easily place objects, materials, effects, and more in your scene. You can also use the new categories and filters to find what you need quickly and efficiently.

-

Lumion 12 also introduces a new alignment tool that helps you align objects with other objects or with the terrain. You can use this tool to snap objects to a grid, a line, a surface, or a point. You can also use the new copy/paste tool to duplicate objects across multiple projects. These tools can help you save time and effort when building your scene.

-

Torrent sources and instructions

-

There are many torrent sources that offer Lumion 3.0pro x86 for download, but not all of them are reliable or safe. Some of them may contain viruses, malware, or fake files that can harm your PC or compromise your privacy. Therefore, you need to be careful and choose a reputable and trusted torrent source that has positive feedback from other users.

-

One of the possible torrent sources that you can use is CracksHash, which provides a direct download link for Lumion 3.0pro x86 with a fix. You can also use Archive.org, which hosts a copy of Lumion 3.0pro x86 multilingual with a crack. However, please note that we do not endorse or guarantee the quality or safety of these torrent sources, and you should use them at your own risk.

-


-

Here are the general instructions to download and install Lumion 3.0pro x86 from a torrent file:

- Download and install a torrent client, such as uTorrent or BitTorrent, on your PC.
- Go to the torrent source website and find the torrent file for Lumion 3.0pro x86. Make sure it has a good number of seeders and leechers, and check the comments for any issues or warnings.
- Download the torrent file and open it with your torrent client. Choose a location to save the downloaded files and start the download process.
- Wait until the download is complete. You should have a folder with several files, such as Lumion_3_0_Pro_x86.exe, Lumion_3_0_Pro_x86.bin, Lumion_3_0_Pro_x86.crack.zip, etc.
- Run the Lumion_3_0_Pro_x86.exe file to start the installation process. Follow the on-screen instructions and accept the terms and conditions. Choose a destination folder for Lumion 3.0 Pro and click Next.
- Wait until the installation is complete. Do not run Lumion 3.0 Pro yet.
- Extract the Lumion_3_0_Pro_x86.crack.zip file to get the crack file, such as Lumion.exe or Lumion.dll.
- Copy and paste the crack file into the installation folder of Lumion 3.0 Pro, replacing the original file.

How to use Lumion 3.0pro x86 to create stunning renders

-

After you have downloaded and installed Lumion 3.0pro x86, you can start using it to create stunning renders of your 3D models. Lumion is very easy and intuitive to use, and you can follow these simple steps to get started:

-

Importing your model from CAD software

-

Lumion can import your model from various CAD software, such as Revit, SketchUp, 3ds Max, AutoCAD, Rhino, ArchiCAD, and more. You can either use the Lumion LiveSync plugin for real-time synchronization with your CAD software, or you can export your model as a Collada (.DAE), SketchUp (.SKP), FBX (.FBX) or DWG (.DWG) file and import it in Lumion.

-

To import your model in Lumion, you need to click on the Import button on the top left corner of the screen, and then browse to the location of your model file. You can also drag and drop your model file into the Lumion window. After you have imported your model, you can move, rotate, scale, or duplicate it using the object placement tools on the bottom right corner of the screen.

-

Adding environments and effects

-

Lumion has a vast library of environments and effects that you can use to enhance your render. You can add realistic landscapes and urban context, such as mountains, forests, roads, buildings, cars, people, animals, and more. You can also add stylish effects, such as weather, lighting, shadows, reflections, fog, fire, water, and more.

-

To add environments and effects in Lumion, you need to click on the corresponding buttons on the top right corner of the screen. You can then browse through the categories and subcategories of the content library and drag and drop the items you want into your scene. You can also adjust the settings and parameters of each item using the sliders and buttons on the bottom left corner of the screen.

-

Exporting your video or image

-

Lumion can export your render as a video or an image with high quality and resolution. You can choose from various presets and formats for your output file, such as MP4, AVI, JPG, PNG, TGA, etc. You can also customize the frame rate, resolution, quality, aspect ratio, and duration of your output file.

-

Pros and cons of Lumion 3.0pro x86

-

Lumion 3.0pro x86 has many advantages and disadvantages that you should consider before using it. Here are some of them:

-

Pros

- Lumion 3.0pro x86 is very easy and intuitive to use, and you can learn it in minutes without any prior training or experience.
- Lumion 3.0pro x86 has a vast library of realistic and stylish environments, effects, objects, and materials that you can use to enhance your render.
- Lumion 3.0pro x86 can import your model from various CAD software and instantly breathe life into your design with real-life context and artistic flair.
- Lumion 3.0pro x86 can export your render as a video or an image with high quality and resolution, and you can customize the output settings to suit your needs.
- Lumion 3.0pro x86 can save you time, money, and effort by making 3D rendering a fast and easy process.

Cons

- Lumion 3.0pro x86 is only compatible with Windows operating systems and does not support Mac OS X or Linux.
- Lumion 3.0pro x86 requires a powerful PC with a good graphics card, processor, memory, and hard drive space to run smoothly and handle complex projects.
- Lumion 3.0pro x86 is not free and has a high price difference between the standard and pro versions. You also need to download it from a torrent file, which may not be reliable or safe.
- Lumion 3.0pro x86 may not have the latest features and improvements that are available in the newer versions of Lumion, such as orthographic views, animated phasing, rain streaks, etc.

Conclusion and FAQs

-

Lumion 3.0pro x86 is a powerful 3D rendering software that can help you visualize your CAD models in a video or image with real-life environments and striking artistic flair. It is very easy and intuitive to use, and it has a vast library of content and effects that you can use to enhance your render. However, it also has some drawbacks, such as compatibility, performance, cost, and realism issues. Therefore, you should weigh the pros and cons of Lumion 3.0pro x86 before using it.

-

Here are some frequently asked questions about Lumion 3.0pro x86:

-

What is the difference between Lumion 3.0pro x86 and Lumion 12 Pro?

-

Lumion 3.0pro x86 is an older version of Lumion that was released in 2012. Lumion 12 Pro is the latest version of Lumion that was released in 2021. Lumion 12 Pro has many new features and improvements that are not available in Lumion 3.0pro x86, such as orthographic views, animated phasing, rain streaks, surface decals, volumetric lights, improved scene-building and workflow, and more. Lumion 12 Pro also has a larger and more updated content library than Lumion 3.0pro x86.

-

How can I get Lumion 3.0pro x86 for free?

-

Lumion 3.0pro x86 is not free and you need to purchase a license to use it legally. However, some torrent sources offer Lumion 3.0pro x86 for download without a license, but this is not recommended or safe. Downloading Lumion 3.0pro x86 from a torrent file may expose your PC to viruses, malware, or fake files that can harm your PC or compromise your privacy. It may also violate the intellectual property rights of Lumion and cause legal problems.

-

Can I use Lumion 3.0pro x86 on a Mac?

-

No. As noted in the cons above, Lumion 3.0pro x86 is only compatible with Windows operating systems and does not support Mac OS X or Linux.

This is the end of the article on HD Online Player (torrent lumion 3.0pro x86): A Review. I hope you enjoyed reading it and learned something new. If you have any questions or feedback, please leave them in the comments below. Thank you for your attention and have a great day!

0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Behen Hogi Teri Full Hd Movie 1080p NEW.md b/spaces/1gistliPinn/ChatGPT4/Examples/Behen Hogi Teri Full Hd Movie 1080p NEW.md
deleted file mode 100644
index 0cdaffe241fef760e782eba8262c93cdbf94e48d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Behen Hogi Teri Full Hd Movie 1080p NEW.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-

Behen Hogi Teri Full HD Movie 1080p - How to Watch or Download It Online

- -

Behen Hogi Teri is a 2017 Bollywood romantic comedy film starring Rajkummar Rao and Shruti Haasan. The film revolves around Gattu, a young man who falls in love with his neighbor Binny, but faces opposition from his community who considers all the girls of the neighborhood as their sisters. The film is a hilarious and heartwarming story of how Gattu tries to win Binny's heart and the approval of his community.

-

Behen Hogi Teri full hd movie 1080p


DOWNLOAD 🔗 https://imgfil.com/2uxX90



- -

If you are looking for a fun and entertaining movie to watch, you should check out Behen Hogi Teri Full HD Movie 1080p. This is a high-quality version of the film that has a resolution of 1080p and a format of x264. This means that you will get to enjoy the film with clear and crisp images and sounds that will enhance your viewing experience.

- -

But how can you watch or download Behen Hogi Teri Full HD Movie 1080p online? In this article, we will show you some of the best ways to get this movie online for free or for a low cost. We will also tell you some of the benefits and challenges of getting this movie, and some of the tips and precautions that you should follow.

- -

Where to Watch or Download Behen Hogi Teri Full HD Movie 1080p Online

- -

There are many websites and platforms that offer Behen Hogi Teri Full HD Movie 1080p online. However, not all of them are reliable or safe. Some of them may have low-quality or fake files, some of them may have ads or interruptions, and some of them may have legal or security risks.

- -

To help you find the best option for you, we have selected some of the most popular and trusted websites and platforms that offer Behen Hogi Teri Full HD Movie 1080p online. Here are some of them:

- -

JustWatch.com

- -

JustWatch.com is a streaming guide that helps you find where to watch movies and TV shows online. You can watch Behen Hogi Teri Full HD Movie 1080p on JustWatch.com for free or for a low cost by choosing from different streaming services that offer this movie. You can also compare prices, quality, and availability of different services.

-

- -

To watch Behen Hogi Teri Full HD Movie 1080p on JustWatch.com, you just need to follow these simple steps:

- -
    -
1. Go to JustWatch.com and search for Behen Hogi Teri Full HD Movie 1080p in the search bar.
2. Select the result that matches the movie exactly and click on it.
3. Choose a streaming service that works best for you from the list of options.
4. Enjoy watching Behen Hogi Teri Full HD Movie 1080p on JustWatch.com.
- -

Actvid.com

- -

Actvid.com is a free streaming website that offers a large collection of movies and TV shows in various genres and languages. You can watch Behen Hogi Teri Full HD Movie 1080p on Actvid.com for free without any registration or subscription. You can also choose from different servers and quality options to suit your preference and convenience.

- -

To watch Behen Hogi Teri Full HD Movie 1080p on Actvid.com, you just need to follow these simple steps:

- -
    -
1. Go to Actvid.com and search for Behen Hogi Teri Full HD Movie 1080p in the search bar.
2. Select the result that matches the movie exactly and click on it.
3. Choose a server and a quality option that works best for you.
4. Enjoy watching Behen Hogi Teri Full HD Movie 1080p on Actvid.com.
- -

Dotmovies.tv

- -

Dotmovies.tv is a download website that offers a large collection of movies and TV shows in various genres and languages. You can download Behen Hogi Teri Full HD Movie 1080p on Dotmovies.tv for free with high-speed Google Drive links. You can also choose from different resolutions and formats to suit your preference and convenience.

- -

To download Behen Hogi Teri Full HD Movie 1080p on Dotmovies.tv, you just need to follow these simple steps:

- -
    -
1. Go to Dotmovies.tv and search for Behen Hogi Teri Full HD Movie 1080p in the search bar.
2. Select the result that matches the movie exactly and click on it.
3. Choose a resolution and a format that works best for you.
4. Click on the Google Drive link and wait for the download to start.
5. Enjoy watching Behen Hogi Teri Full HD Movie 1080p on your device.
- -

What are the Benefits of Watching or Downloading Behen Hogi Teri Full HD Movie 1080p Online

- -

By watching or downloading Behen Hogi Teri Full HD Movie 1080p online, you will enjoy several benefits that will enhance your viewing experience. Here are some of them:

- - -

Offline mode and cloud save

-

Bubble Shooter APK also allows you to play the game offline, without any internet connection. You can enjoy the game anytime and anywhere, without worrying about data usage or Wi-Fi availability. You can also sync your progress across different devices, using your Google Play account. This way, you can continue where you left off, and never lose your achievements.

-

How to download and install Bubble Shooter APK?

-

Downloading and installing Bubble Shooter APK is easy and fast. You just need to follow these simple steps:

-


-

Step 1: Enable unknown sources

-

Before you can install any APK file on your device, you need to enable unknown sources. This is a security setting that allows you to install apps from sources other than the Play Store. To enable it, go to Settings > Security > Unknown Sources and toggle it on; on newer Android versions (8.0 and later) the permission is granted per app instead, typically under Settings > Apps > Special app access > Install unknown apps. You may see a warning message; this is expected, and you can proceed.

-

Step 2: Download the APK file

-

Next, you need to download the APK file of Bubble Shooter. You can find it on various websites that offer APK downloads, such as APKPure, APKMirror, or Uptodown. Make sure you download the latest version of the game, and check the file size and permissions before downloading. You can also scan the file with an antivirus app to ensure it is safe and free of malware.
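
If the download site publishes a checksum for the APK, you can verify the file before installing it. The sketch below is illustrative only; the file name and expected hash are placeholders, not values taken from this article:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: use the APK you actually downloaded and the checksum
# published by the site, if it provides one.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of_file("bubble-shooter.apk")
print("OK" if actual == expected else f"Mismatch: {actual}")
```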

-

Step 3: Install the APK file

-

Once you have downloaded the APK file, you can install it on your device. To do this, locate the file in your downloads folder or notification bar, and tap on it. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to finish. You may see some permissions requests, which you need to allow for the game to run properly. After the installation is complete, you can launch the game and enjoy it.
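
As an alternative to tapping the file on the phone, the same step can be done from a computer over adb. This is only a sketch: it assumes the Android platform-tools are installed, USB debugging is enabled on the device, and the APK path is a placeholder for your own download:

```python
import subprocess

# Placeholder path; point this at the APK you actually downloaded.
apk_path = "bubble-shooter.apk"

# "adb install -r" installs the APK, replacing an existing copy if present.
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```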

-

Conclusion

-

Bubble Shooter is a fun and addictive puzzle game that will keep you entertained for hours. It has thousands of levels, colorful graphics, sound effects, boosters, power-ups, offline mode, cloud save, and more. You can download and install its APK version on your device easily and quickly, following the steps we have explained in this article. If you love puzzle games, you should definitely give Bubble Shooter a try.

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Artcam Jewelsmith 9.1 Dongle Crack.md b/spaces/contluForse/HuggingGPT/assets/Artcam Jewelsmith 9.1 Dongle Crack.md deleted file mode 100644 index 8c809793b75cf8aecf1696892902e17b53fbef73..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Artcam Jewelsmith 9.1 Dongle Crack.md +++ /dev/null @@ -1,36 +0,0 @@ -

How to Install and Crack Artcam Jewelsmith 9.1 for Free

-

Artcam Jewelsmith 9.1 is a powerful CAD/CAM software that allows you to design and create stunning jewelry pieces with ease. Whether you are a professional jeweler or a hobbyist, Artcam Jewelsmith 9.1 can help you turn your ideas into reality.

-

Artcam Jewelsmith 9.1 Dongle Crack


Download File ☆☆☆☆☆ https://ssurll.com/2uzySC



-

However, Artcam Jewelsmith 9.1 is not cheap software, and you may need a dongle to run it on your computer. A dongle is a small device that plugs into a USB port and acts as a hardware security key for the software. Without it, you may not be able to use all the features of Artcam Jewelsmith 9.1, or even run it at all.

-

Fortunately, there is a way to install and crack Artcam Jewelsmith 9.1 without a dongle and enjoy it for free. In this article, we will show you how to do it step by step, using a simple method that anyone can follow. All you need is a computer with Windows 7, 8, 8.1 or 10 (32/64 bit), an internet connection, and some patience.

-

Step 1: Download Artcam Jewelsmith 9.1

-

The first thing you need to do is to download Artcam Jewelsmith 9.1 from a reliable source. There are many websites that offer free downloads of Artcam Jewelsmith 9.1, but not all of them are safe or trustworthy. Some of them may contain viruses, malware, or fake files that can harm your computer or steal your personal information.

-

One of the best websites to download Artcam Jewelsmith 9.1 for free is usersdrive.com. This website has been tested and verified by many users who have successfully installed and cracked Artcam Jewelsmith 9.1 on their computers. The download link is updated regularly and works as of April 2023.

-

To download Artcam Jewelsmith 9.1 from usersdrive.com, follow these steps:

-

-
    -
1. Click on this link: https://usersdrive.com/th58g0kri1cy.html
2. Wait for a few seconds until the download button appears.
3. Click on the download button and save the file on your computer.
4. The file size is about 2 GB, so it may take some time to download depending on your internet speed.
-

Step 2: Install Artcam Jewelsmith 9.1

-

Once you have downloaded Artcam Jewelsmith 9.1, you need to install it on your computer. To do this, follow these steps:

-
    -
1. Locate the downloaded file on your computer and extract it using WinRAR or any other software that can unzip files.
2. You should see a folder named "ArtCAM_2008_SP5" with several files inside.
3. Double-click on the file named "setup.exe" to start the installation process.
4. Follow the instructions on the screen and choose the language, destination folder, and components you want to install.
5. When prompted, enter the serial number "A9RJ1" (without quotes) and click Next.
6. Wait for the installation to finish and click Finish.
-

Step 3: Crack Artcam Jewelsmith 9.1

-

The final step is to crack Artcam Jewelsmith 9.1 so that you can use it without a dongle and for free. To do this, follow these steps:

-
    -
1. Go back to the folder where you extracted the downloaded file and open the subfolder named "Crack".
2. You should see two files named "ArtCAMPro.exe" and "Sentinel Protection Installer.exe".
3. Copy the file

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/CRACK Windows 7 Ultimate SP1 PT-BR X32 ISO.md b/spaces/contluForse/HuggingGPT/assets/CRACK Windows 7 Ultimate SP1 PT-BR X32 ISO.md deleted file mode 100644 index cf840e5cb74ade1f6968c908c51716487c231c76..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CRACK Windows 7 Ultimate SP1 PT-BR X32 ISO.md +++ /dev/null @@ -1,6 +0,0 @@ -

    CRACK Windows 7 Ultimate SP1 PT-BR X32 ISO


    Download Zip ⇒⇒⇒ https://ssurll.com/2uzxwM



Windows 7 Lite Ultimate 1903 is the most polished stripped-down build of Windows 7. ... Format: ISO. Download Windows 10 1903 (x64); Architecture: x86 (32-bit) 28 ... Related software: Windows 10 Lite 32/64 PT-BR download; Seeds 25, Peers 37 ... In fact, there are several ways to crack a Windows 10 Pro product key for ...
    -
    -
    -

    diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/mlp.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/mlp.py deleted file mode 100644 index 05d076527cfb6f15bcf5f2830fa36777abbc5a1e..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/mlp.py +++ /dev/null @@ -1,108 +0,0 @@ -""" MLP module w/ dropout and configurable activation layer - -Hacked together by / Copyright 2020 Ross Wightman -""" -from torch import nn as nn - - -class Mlp(nn.Module): - """ MLP as used in Vision Transformer, MLP-Mixer and related networks - """ - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class GluMlp(nn.Module): - """ MLP w/ GLU style gating - See: https://arxiv.org/abs/1612.08083, https://arxiv.org/abs/2002.05202 - """ - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.Sigmoid, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - assert hidden_features % 2 == 0 - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features // 2, out_features) - self.drop = nn.Dropout(drop) - - def init_weights(self): - # override init of fc1 w/ gate portion set to weight near zero, bias=1 - fc1_mid = self.fc1.bias.shape[0] // 2 - nn.init.ones_(self.fc1.bias[fc1_mid:]) - nn.init.normal_(self.fc1.weight[fc1_mid:], std=1e-6) - - def forward(self, x): - x = self.fc1(x) - x, gates = x.chunk(2, dim=-1) - x = x * self.act(gates) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class GatedMlp(nn.Module): - """ MLP as used in gMLP - """ - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, - gate_layer=None, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - if gate_layer is not None: - assert hidden_features % 2 == 0 - self.gate = gate_layer(hidden_features) - hidden_features = hidden_features // 2 # FIXME base reduction on gate property? 
- else: - self.gate = nn.Identity() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.gate(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class ConvMlp(nn.Module): - """ MLP using 1x1 convs that keeps spatial dims - """ - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.ReLU, norm_layer=None, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, kernel_size=1, bias=True) - self.norm = norm_layer(hidden_features) if norm_layer else nn.Identity() - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, kernel_size=1, bias=True) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.norm(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - return x diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/engine/defaults.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/engine/defaults.py deleted file mode 100644 index 51d49148ca7b048402a63490bf7df83a43c65d9f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/engine/defaults.py +++ /dev/null @@ -1,715 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -This file contains components with some default boilerplate logic user may need -in training / testing. They will not work for everyone, but many users may find them useful. - -The behavior of functions/classes in this file is subject to change, -since they are meant to represent the "common default behavior" people need in their projects. -""" - -import argparse -import logging -import os -import sys -import weakref -from collections import OrderedDict -from typing import Optional -import torch -from fvcore.nn.precise_bn import get_bn_modules -from omegaconf import OmegaConf -from torch.nn.parallel import DistributedDataParallel - -import annotator.oneformer.detectron2.data.transforms as T -from annotator.oneformer.detectron2.checkpoint import DetectionCheckpointer -from annotator.oneformer.detectron2.config import CfgNode, LazyConfig -from annotator.oneformer.detectron2.data import ( - MetadataCatalog, - build_detection_test_loader, - build_detection_train_loader, -) -from annotator.oneformer.detectron2.evaluation import ( - DatasetEvaluator, - inference_on_dataset, - print_csv_format, - verify_results, -) -from annotator.oneformer.detectron2.modeling import build_model -from annotator.oneformer.detectron2.solver import build_lr_scheduler, build_optimizer -from annotator.oneformer.detectron2.utils import comm -from annotator.oneformer.detectron2.utils.collect_env import collect_env_info -from annotator.oneformer.detectron2.utils.env import seed_all_rng -from annotator.oneformer.detectron2.utils.events import CommonMetricPrinter, JSONWriter, TensorboardXWriter -from annotator.oneformer.detectron2.utils.file_io import PathManager -from annotator.oneformer.detectron2.utils.logger import setup_logger - -from . 
import hooks -from .train_loop import AMPTrainer, SimpleTrainer, TrainerBase - -__all__ = [ - "create_ddp_model", - "default_argument_parser", - "default_setup", - "default_writers", - "DefaultPredictor", - "DefaultTrainer", -] - - -def create_ddp_model(model, *, fp16_compression=False, **kwargs): - """ - Create a DistributedDataParallel model if there are >1 processes. - - Args: - model: a torch.nn.Module - fp16_compression: add fp16 compression hooks to the ddp object. - See more at https://pytorch.org/docs/stable/ddp_comm_hooks.html#torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook - kwargs: other arguments of :module:`torch.nn.parallel.DistributedDataParallel`. - """ # noqa - if comm.get_world_size() == 1: - return model - if "device_ids" not in kwargs: - kwargs["device_ids"] = [comm.get_local_rank()] - ddp = DistributedDataParallel(model, **kwargs) - if fp16_compression: - from torch.distributed.algorithms.ddp_comm_hooks import default as comm_hooks - - ddp.register_comm_hook(state=None, hook=comm_hooks.fp16_compress_hook) - return ddp - - -def default_argument_parser(epilog=None): - """ - Create a parser with some common arguments used by detectron2 users. - - Args: - epilog (str): epilog passed to ArgumentParser describing the usage. - - Returns: - argparse.ArgumentParser: - """ - parser = argparse.ArgumentParser( - epilog=epilog - or f""" -Examples: - -Run on single machine: - $ {sys.argv[0]} --num-gpus 8 --config-file cfg.yaml - -Change some config options: - $ {sys.argv[0]} --config-file cfg.yaml MODEL.WEIGHTS /path/to/weight.pth SOLVER.BASE_LR 0.001 - -Run on multiple machines: - (machine0)$ {sys.argv[0]} --machine-rank 0 --num-machines 2 --dist-url [--other-flags] - (machine1)$ {sys.argv[0]} --machine-rank 1 --num-machines 2 --dist-url [--other-flags] -""", - formatter_class=argparse.RawDescriptionHelpFormatter, - ) - parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file") - parser.add_argument( - "--resume", - action="store_true", - help="Whether to attempt to resume from the checkpoint directory. " - "See documentation of `DefaultTrainer.resume_or_load()` for what it means.", - ) - parser.add_argument("--eval-only", action="store_true", help="perform evaluation only") - parser.add_argument("--num-gpus", type=int, default=1, help="number of gpus *per machine*") - parser.add_argument("--num-machines", type=int, default=1, help="total number of machines") - parser.add_argument( - "--machine-rank", type=int, default=0, help="the rank of this machine (unique per machine)" - ) - - # PyTorch still may leave orphan processes in multi-gpu training. - # Therefore we use a deterministic way to obtain port, - # so that users are aware of orphan processes by seeing the port occupied. - port = 2**15 + 2**14 + hash(os.getuid() if sys.platform != "win32" else 1) % 2**14 - parser.add_argument( - "--dist-url", - default="tcp://127.0.0.1:{}".format(port), - help="initialization URL for pytorch distributed backend. See " - "https://pytorch.org/docs/stable/distributed.html for details.", - ) - parser.add_argument( - "opts", - help=""" -Modify config options at the end of the command. For Yacs configs, use -space-separated "PATH.KEY VALUE" pairs. -For python-based LazyConfig, use "path.key=value". - """.strip(), - default=None, - nargs=argparse.REMAINDER, - ) - return parser - - -def _try_get_key(cfg, *keys, default=None): - """ - Try select keys from cfg until the first key that exists. Otherwise return default. 
- """ - if isinstance(cfg, CfgNode): - cfg = OmegaConf.create(cfg.dump()) - for k in keys: - none = object() - p = OmegaConf.select(cfg, k, default=none) - if p is not none: - return p - return default - - -def _highlight(code, filename): - try: - import pygments - except ImportError: - return code - - from pygments.lexers import Python3Lexer, YamlLexer - from pygments.formatters import Terminal256Formatter - - lexer = Python3Lexer() if filename.endswith(".py") else YamlLexer() - code = pygments.highlight(code, lexer, Terminal256Formatter(style="monokai")) - return code - - -def default_setup(cfg, args): - """ - Perform some basic common setups at the beginning of a job, including: - - 1. Set up the detectron2 logger - 2. Log basic information about environment, cmdline arguments, and config - 3. Backup the config to the output directory - - Args: - cfg (CfgNode or omegaconf.DictConfig): the full config to be used - args (argparse.NameSpace): the command line arguments to be logged - """ - output_dir = _try_get_key(cfg, "OUTPUT_DIR", "output_dir", "train.output_dir") - if comm.is_main_process() and output_dir: - PathManager.mkdirs(output_dir) - - rank = comm.get_rank() - setup_logger(output_dir, distributed_rank=rank, name="fvcore") - logger = setup_logger(output_dir, distributed_rank=rank) - - logger.info("Rank of current process: {}. World size: {}".format(rank, comm.get_world_size())) - logger.info("Environment info:\n" + collect_env_info()) - - logger.info("Command line arguments: " + str(args)) - if hasattr(args, "config_file") and args.config_file != "": - logger.info( - "Contents of args.config_file={}:\n{}".format( - args.config_file, - _highlight(PathManager.open(args.config_file, "r").read(), args.config_file), - ) - ) - - if comm.is_main_process() and output_dir: - # Note: some of our scripts may expect the existence of - # config.yaml in output directory - path = os.path.join(output_dir, "config.yaml") - if isinstance(cfg, CfgNode): - logger.info("Running with full config:\n{}".format(_highlight(cfg.dump(), ".yaml"))) - with PathManager.open(path, "w") as f: - f.write(cfg.dump()) - else: - LazyConfig.save(cfg, path) - logger.info("Full config saved to {}".format(path)) - - # make sure each worker has a different, yet deterministic seed if specified - seed = _try_get_key(cfg, "SEED", "train.seed", default=-1) - seed_all_rng(None if seed < 0 else seed + rank) - - # cudnn benchmark has large overhead. It shouldn't be used considering the small size of - # typical validation set. - if not (hasattr(args, "eval_only") and args.eval_only): - torch.backends.cudnn.benchmark = _try_get_key( - cfg, "CUDNN_BENCHMARK", "train.cudnn_benchmark", default=False - ) - - -def default_writers(output_dir: str, max_iter: Optional[int] = None): - """ - Build a list of :class:`EventWriter` to be used. - It now consists of a :class:`CommonMetricPrinter`, - :class:`TensorboardXWriter` and :class:`JSONWriter`. - - Args: - output_dir: directory to store JSON metrics and tensorboard events - max_iter: the total number of iterations - - Returns: - list[EventWriter]: a list of :class:`EventWriter` objects. - """ - PathManager.mkdirs(output_dir) - return [ - # It may not always print what you want to see, since it prints "common" metrics only. 
- CommonMetricPrinter(max_iter), - JSONWriter(os.path.join(output_dir, "metrics.json")), - TensorboardXWriter(output_dir), - ] - - -class DefaultPredictor: - """ - Create a simple end-to-end predictor with the given config that runs on - single device for a single input image. - - Compared to using the model directly, this class does the following additions: - - 1. Load checkpoint from `cfg.MODEL.WEIGHTS`. - 2. Always take BGR image as the input and apply conversion defined by `cfg.INPUT.FORMAT`. - 3. Apply resizing defined by `cfg.INPUT.{MIN,MAX}_SIZE_TEST`. - 4. Take one input image and produce a single output, instead of a batch. - - This is meant for simple demo purposes, so it does the above steps automatically. - This is not meant for benchmarks or running complicated inference logic. - If you'd like to do anything more complicated, please refer to its source code as - examples to build and use the model manually. - - Attributes: - metadata (Metadata): the metadata of the underlying dataset, obtained from - cfg.DATASETS.TEST. - - Examples: - :: - pred = DefaultPredictor(cfg) - inputs = cv2.imread("input.jpg") - outputs = pred(inputs) - """ - - def __init__(self, cfg): - self.cfg = cfg.clone() # cfg can be modified by model - self.model = build_model(self.cfg) - self.model.eval() - if len(cfg.DATASETS.TEST): - self.metadata = MetadataCatalog.get(cfg.DATASETS.TEST[0]) - - checkpointer = DetectionCheckpointer(self.model) - checkpointer.load(cfg.MODEL.WEIGHTS) - - self.aug = T.ResizeShortestEdge( - [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST - ) - - self.input_format = cfg.INPUT.FORMAT - assert self.input_format in ["RGB", "BGR"], self.input_format - - def __call__(self, original_image): - """ - Args: - original_image (np.ndarray): an image of shape (H, W, C) (in BGR order). - - Returns: - predictions (dict): - the output of the model for one image only. - See :doc:`/tutorials/models` for details about the format. - """ - with torch.no_grad(): # https://github.com/sphinx-doc/sphinx/issues/4258 - # Apply pre-processing to image. - if self.input_format == "RGB": - # whether the model expects BGR inputs or RGB - original_image = original_image[:, :, ::-1] - height, width = original_image.shape[:2] - image = self.aug.get_transform(original_image).apply_image(original_image) - image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) - - inputs = {"image": image, "height": height, "width": width} - predictions = self.model([inputs])[0] - return predictions - - -class DefaultTrainer(TrainerBase): - """ - A trainer with default training logic. It does the following: - - 1. Create a :class:`SimpleTrainer` using model, optimizer, dataloader - defined by the given config. Create a LR scheduler defined by the config. - 2. Load the last checkpoint or `cfg.MODEL.WEIGHTS`, if exists, when - `resume_or_load` is called. - 3. Register a few common hooks defined by the config. - - It is created to simplify the **standard model training workflow** and reduce code boilerplate - for users who only need the standard training workflow, with standard features. - It means this class makes *many assumptions* about your training logic that - may easily become invalid in a new research. In fact, any assumptions beyond those made in the - :class:`SimpleTrainer` are too much for research. - - The code of this class has been annotated about restrictive assumptions it makes. - When they do not work for you, you're encouraged to: - - 1. Overwrite methods of this class, OR: - 2. 
Use :class:`SimpleTrainer`, which only does minimal SGD training and - nothing else. You can then add your own hooks if needed. OR: - 3. Write your own training loop similar to `tools/plain_train_net.py`. - - See the :doc:`/tutorials/training` tutorials for more details. - - Note that the behavior of this class, like other functions/classes in - this file, is not stable, since it is meant to represent the "common default behavior". - It is only guaranteed to work well with the standard models and training workflow in detectron2. - To obtain more stable behavior, write your own training logic with other public APIs. - - Examples: - :: - trainer = DefaultTrainer(cfg) - trainer.resume_or_load() # load last checkpoint or MODEL.WEIGHTS - trainer.train() - - Attributes: - scheduler: - checkpointer (DetectionCheckpointer): - cfg (CfgNode): - """ - - def __init__(self, cfg): - """ - Args: - cfg (CfgNode): - """ - super().__init__() - logger = logging.getLogger("detectron2") - if not logger.isEnabledFor(logging.INFO): # setup_logger is not called for d2 - setup_logger() - cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size()) - - # Assume these objects must be constructed in this order. - model = self.build_model(cfg) - optimizer = self.build_optimizer(cfg, model) - data_loader = self.build_train_loader(cfg) - - model = create_ddp_model(model, broadcast_buffers=False) - self._trainer = (AMPTrainer if cfg.SOLVER.AMP.ENABLED else SimpleTrainer)( - model, data_loader, optimizer - ) - - self.scheduler = self.build_lr_scheduler(cfg, optimizer) - self.checkpointer = DetectionCheckpointer( - # Assume you want to save checkpoints together with logs/statistics - model, - cfg.OUTPUT_DIR, - trainer=weakref.proxy(self), - ) - self.start_iter = 0 - self.max_iter = cfg.SOLVER.MAX_ITER - self.cfg = cfg - - self.register_hooks(self.build_hooks()) - - def resume_or_load(self, resume=True): - """ - If `resume==True` and `cfg.OUTPUT_DIR` contains the last checkpoint (defined by - a `last_checkpoint` file), resume from the file. Resuming means loading all - available states (eg. optimizer and scheduler) and update iteration counter - from the checkpoint. ``cfg.MODEL.WEIGHTS`` will not be used. - - Otherwise, this is considered as an independent training. The method will load model - weights from the file `cfg.MODEL.WEIGHTS` (but will not load other states) and start - from iteration 0. - - Args: - resume (bool): whether to do resume or not - """ - self.checkpointer.resume_or_load(self.cfg.MODEL.WEIGHTS, resume=resume) - if resume and self.checkpointer.has_checkpoint(): - # The checkpoint stores the training iteration that just finished, thus we start - # at the next iteration - self.start_iter = self.iter + 1 - - def build_hooks(self): - """ - Build a list of default hooks, including timing, evaluation, - checkpointing, lr scheduling, precise BN, writing events. - - Returns: - list[HookBase]: - """ - cfg = self.cfg.clone() - cfg.defrost() - cfg.DATALOADER.NUM_WORKERS = 0 # save some memory and time for PreciseBN - - ret = [ - hooks.IterationTimer(), - hooks.LRScheduler(), - hooks.PreciseBN( - # Run at the same freq as (but before) evaluation. - cfg.TEST.EVAL_PERIOD, - self.model, - # Build a new data loader to not affect training - self.build_train_loader(cfg), - cfg.TEST.PRECISE_BN.NUM_ITER, - ) - if cfg.TEST.PRECISE_BN.ENABLED and get_bn_modules(self.model) - else None, - ] - - # Do PreciseBN before checkpointer, because it updates the model and need to - # be saved by checkpointer. 
- # This is not always the best: if checkpointing has a different frequency, - # some checkpoints may have more precise statistics than others. - if comm.is_main_process(): - ret.append(hooks.PeriodicCheckpointer(self.checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD)) - - def test_and_save_results(): - self._last_eval_results = self.test(self.cfg, self.model) - return self._last_eval_results - - # Do evaluation after checkpointer, because then if it fails, - # we can use the saved checkpoint to debug. - ret.append(hooks.EvalHook(cfg.TEST.EVAL_PERIOD, test_and_save_results)) - - if comm.is_main_process(): - # Here the default print/log frequency of each writer is used. - # run writers in the end, so that evaluation metrics are written - ret.append(hooks.PeriodicWriter(self.build_writers(), period=20)) - return ret - - def build_writers(self): - """ - Build a list of writers to be used using :func:`default_writers()`. - If you'd like a different list of writers, you can overwrite it in - your trainer. - - Returns: - list[EventWriter]: a list of :class:`EventWriter` objects. - """ - return default_writers(self.cfg.OUTPUT_DIR, self.max_iter) - - def train(self): - """ - Run training. - - Returns: - OrderedDict of results, if evaluation is enabled. Otherwise None. - """ - super().train(self.start_iter, self.max_iter) - if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process(): - assert hasattr( - self, "_last_eval_results" - ), "No evaluation results obtained during training!" - verify_results(self.cfg, self._last_eval_results) - return self._last_eval_results - - def run_step(self): - self._trainer.iter = self.iter - self._trainer.run_step() - - def state_dict(self): - ret = super().state_dict() - ret["_trainer"] = self._trainer.state_dict() - return ret - - def load_state_dict(self, state_dict): - super().load_state_dict(state_dict) - self._trainer.load_state_dict(state_dict["_trainer"]) - - @classmethod - def build_model(cls, cfg): - """ - Returns: - torch.nn.Module: - - It now calls :func:`detectron2.modeling.build_model`. - Overwrite it if you'd like a different model. - """ - model = build_model(cfg) - logger = logging.getLogger(__name__) - logger.info("Model:\n{}".format(model)) - return model - - @classmethod - def build_optimizer(cls, cfg, model): - """ - Returns: - torch.optim.Optimizer: - - It now calls :func:`detectron2.solver.build_optimizer`. - Overwrite it if you'd like a different optimizer. - """ - return build_optimizer(cfg, model) - - @classmethod - def build_lr_scheduler(cls, cfg, optimizer): - """ - It now calls :func:`detectron2.solver.build_lr_scheduler`. - Overwrite it if you'd like a different scheduler. - """ - return build_lr_scheduler(cfg, optimizer) - - @classmethod - def build_train_loader(cls, cfg): - """ - Returns: - iterable - - It now calls :func:`detectron2.data.build_detection_train_loader`. - Overwrite it if you'd like a different data loader. - """ - return build_detection_train_loader(cfg) - - @classmethod - def build_test_loader(cls, cfg, dataset_name): - """ - Returns: - iterable - - It now calls :func:`detectron2.data.build_detection_test_loader`. - Overwrite it if you'd like a different data loader. - """ - return build_detection_test_loader(cfg, dataset_name) - - @classmethod - def build_evaluator(cls, cfg, dataset_name): - """ - Returns: - DatasetEvaluator or None - - It is not implemented by default. 
- """ - raise NotImplementedError( - """ -If you want DefaultTrainer to automatically run evaluation, -please implement `build_evaluator()` in subclasses (see train_net.py for example). -Alternatively, you can call evaluation functions yourself (see Colab balloon tutorial for example). -""" - ) - - @classmethod - def test(cls, cfg, model, evaluators=None): - """ - Evaluate the given model. The given model is expected to already contain - weights to evaluate. - - Args: - cfg (CfgNode): - model (nn.Module): - evaluators (list[DatasetEvaluator] or None): if None, will call - :meth:`build_evaluator`. Otherwise, must have the same length as - ``cfg.DATASETS.TEST``. - - Returns: - dict: a dict of result metrics - """ - logger = logging.getLogger(__name__) - if isinstance(evaluators, DatasetEvaluator): - evaluators = [evaluators] - if evaluators is not None: - assert len(cfg.DATASETS.TEST) == len(evaluators), "{} != {}".format( - len(cfg.DATASETS.TEST), len(evaluators) - ) - - results = OrderedDict() - for idx, dataset_name in enumerate(cfg.DATASETS.TEST): - data_loader = cls.build_test_loader(cfg, dataset_name) - # When evaluators are passed in as arguments, - # implicitly assume that evaluators can be created before data_loader. - if evaluators is not None: - evaluator = evaluators[idx] - else: - try: - evaluator = cls.build_evaluator(cfg, dataset_name) - except NotImplementedError: - logger.warn( - "No evaluator found. Use `DefaultTrainer.test(evaluators=)`, " - "or implement its `build_evaluator` method." - ) - results[dataset_name] = {} - continue - results_i = inference_on_dataset(model, data_loader, evaluator) - results[dataset_name] = results_i - if comm.is_main_process(): - assert isinstance( - results_i, dict - ), "Evaluator must return a dict on the main process. Got {} instead.".format( - results_i - ) - logger.info("Evaluation results for {} in csv format:".format(dataset_name)) - print_csv_format(results_i) - - if len(results) == 1: - results = list(results.values())[0] - return results - - @staticmethod - def auto_scale_workers(cfg, num_workers: int): - """ - When the config is defined for certain number of workers (according to - ``cfg.SOLVER.REFERENCE_WORLD_SIZE``) that's different from the number of - workers currently in use, returns a new cfg where the total batch size - is scaled so that the per-GPU batch size stays the same as the - original ``IMS_PER_BATCH // REFERENCE_WORLD_SIZE``. - - Other config options are also scaled accordingly: - * training steps and warmup steps are scaled inverse proportionally. - * learning rate are scaled proportionally, following :paper:`ImageNet in 1h`. - - For example, with the original config like the following: - - .. code-block:: yaml - - IMS_PER_BATCH: 16 - BASE_LR: 0.1 - REFERENCE_WORLD_SIZE: 8 - MAX_ITER: 5000 - STEPS: (4000,) - CHECKPOINT_PERIOD: 1000 - - When this config is used on 16 GPUs instead of the reference number 8, - calling this method will return a new config with: - - .. code-block:: yaml - - IMS_PER_BATCH: 32 - BASE_LR: 0.2 - REFERENCE_WORLD_SIZE: 16 - MAX_ITER: 2500 - STEPS: (2000,) - CHECKPOINT_PERIOD: 500 - - Note that both the original config and this new config can be trained on 16 GPUs. - It's up to user whether to enable this feature (by setting ``REFERENCE_WORLD_SIZE``). - - Returns: - CfgNode: a new config. Same as original if ``cfg.SOLVER.REFERENCE_WORLD_SIZE==0``. 
- """ - old_world_size = cfg.SOLVER.REFERENCE_WORLD_SIZE - if old_world_size == 0 or old_world_size == num_workers: - return cfg - cfg = cfg.clone() - frozen = cfg.is_frozen() - cfg.defrost() - - assert ( - cfg.SOLVER.IMS_PER_BATCH % old_world_size == 0 - ), "Invalid REFERENCE_WORLD_SIZE in config!" - scale = num_workers / old_world_size - bs = cfg.SOLVER.IMS_PER_BATCH = int(round(cfg.SOLVER.IMS_PER_BATCH * scale)) - lr = cfg.SOLVER.BASE_LR = cfg.SOLVER.BASE_LR * scale - max_iter = cfg.SOLVER.MAX_ITER = int(round(cfg.SOLVER.MAX_ITER / scale)) - warmup_iter = cfg.SOLVER.WARMUP_ITERS = int(round(cfg.SOLVER.WARMUP_ITERS / scale)) - cfg.SOLVER.STEPS = tuple(int(round(s / scale)) for s in cfg.SOLVER.STEPS) - cfg.TEST.EVAL_PERIOD = int(round(cfg.TEST.EVAL_PERIOD / scale)) - cfg.SOLVER.CHECKPOINT_PERIOD = int(round(cfg.SOLVER.CHECKPOINT_PERIOD / scale)) - cfg.SOLVER.REFERENCE_WORLD_SIZE = num_workers # maintain invariant - logger = logging.getLogger(__name__) - logger.info( - f"Auto-scaling the config to batch_size={bs}, learning_rate={lr}, " - f"max_iter={max_iter}, warmup={warmup_iter}." - ) - - if frozen: - cfg.freeze() - return cfg - - -# Access basic attributes from the underlying trainer -for _attr in ["model", "data_loader", "optimizer"]: - setattr( - DefaultTrainer, - _attr, - property( - # getter - lambda self, x=_attr: getattr(self._trainer, x), - # setter - lambda self, value, x=_attr: setattr(self._trainer, x, value), - ), - ) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/msdeformattn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/msdeformattn.py deleted file mode 100644 index 007051d713fd89a622154f5e0edc9902627cca14..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/msdeformattn.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import numpy as np -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_ -from torch.cuda.amp import autocast - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.layers import Conv2d, ShapeSpec, get_norm -from annotator.oneformer.detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer_decoder.position_encoding import PositionEmbeddingSine -from ..transformer_decoder.transformer import _get_clones, _get_activation_fn -from .ops.modules import MSDeformAttn - - -# MSDeformAttn Transformer encoder in deformable detr -class MSDeformAttnTransformerEncoderOnly(nn.Module): - def __init__(self, d_model=256, nhead=8, - num_encoder_layers=6, dim_feedforward=1024, dropout=0.1, - activation="relu", - num_feature_levels=4, enc_n_points=4, - ): - super().__init__() - - self.d_model = d_model - self.nhead = nhead - - encoder_layer = MSDeformAttnTransformerEncoderLayer(d_model, dim_feedforward, - dropout, activation, - num_feature_levels, nhead, enc_n_points) - self.encoder = MSDeformAttnTransformerEncoder(encoder_layer, num_encoder_layers) - - self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model)) - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MSDeformAttn): - m._reset_parameters() - normal_(self.level_embed) - - def get_valid_ratio(self, mask): - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def forward(self, srcs, pos_embeds): - masks = [torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) for x in srcs] - # prepare input for encoder - src_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)): - bs, c, h, w = src.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - src = src.flatten(2).transpose(1, 2) - mask = mask.flatten(1) - pos_embed = pos_embed.flatten(2).transpose(1, 2) - lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1) - lvl_pos_embed_flatten.append(lvl_pos_embed) - src_flatten.append(src) - mask_flatten.append(mask) - src_flatten = torch.cat(src_flatten, 1) - mask_flatten = torch.cat(mask_flatten, 1) - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) - spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device) - level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1) - - # encoder - memory = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, lvl_pos_embed_flatten, mask_flatten) - - return memory, spatial_shapes, level_start_index, valid_ratios - - -class MSDeformAttnTransformerEncoderLayer(nn.Module): - def __init__(self, - d_model=256, d_ffn=1024, - dropout=0.1, activation="relu", - n_levels=4, n_heads=8, n_points=4): - super().__init__() - - # self attention - self.self_attn = 
MSDeformAttn(d_model, n_levels, n_heads, n_points) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation) - self.dropout2 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout3 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, src): - src2 = self.linear2(self.dropout2(self.activation(self.linear1(src)))) - src = src + self.dropout3(src2) - src = self.norm2(src) - return src - - def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, padding_mask=None): - # self attention - src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask) - src = src + self.dropout1(src2) - src = self.norm1(src) - - # ffn - src = self.forward_ffn(src) - - return src - - -class MSDeformAttnTransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers): - super().__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - reference_points_list = [] - for lvl, (H_, W_) in enumerate(spatial_shapes): - - ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device), - torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device)) - ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_) - ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None): - output = src - reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device) - for _, layer in enumerate(self.layers): - output = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask) - - return output - - -@SEM_SEG_HEADS_REGISTRY.register() -class MSDeformAttnPixelDecoder(nn.Module): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - transformer_dropout: float, - transformer_nheads: int, - transformer_dim_feedforward: int, - transformer_enc_layers: int, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - # deformable transformer encoder args - transformer_in_features: List[str], - common_stride: int, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - transformer_dropout: dropout probability in transformer - transformer_nheads: number of heads in transformer - transformer_dim_feedforward: dimension of feedforward network - transformer_enc_layers: number of transformer encoder layers - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. 
- norm (str or callable): normalization for all conv layers - """ - super().__init__() - transformer_input_shape = { - k: v for k, v in input_shape.items() if k in transformer_in_features - } - - # this is the input shape of pixel decoder - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - self.feature_strides = [v.stride for k, v in input_shape] - self.feature_channels = [v.channels for k, v in input_shape] - - # this is the input shape of transformer encoder (could use less features than pixel decoder - transformer_input_shape = sorted(transformer_input_shape.items(), key=lambda x: x[1].stride) - self.transformer_in_features = [k for k, v in transformer_input_shape] # starting from "res2" to "res5" - transformer_in_channels = [v.channels for k, v in transformer_input_shape] - self.transformer_feature_strides = [v.stride for k, v in transformer_input_shape] # to decide extra FPN layers - - self.transformer_num_feature_levels = len(self.transformer_in_features) - if self.transformer_num_feature_levels > 1: - input_proj_list = [] - # from low resolution to high resolution (res5 -> res2) - for in_channels in transformer_in_channels[::-1]: - input_proj_list.append(nn.Sequential( - nn.Conv2d(in_channels, conv_dim, kernel_size=1), - nn.GroupNorm(32, conv_dim), - )) - self.input_proj = nn.ModuleList(input_proj_list) - else: - self.input_proj = nn.ModuleList([ - nn.Sequential( - nn.Conv2d(transformer_in_channels[-1], conv_dim, kernel_size=1), - nn.GroupNorm(32, conv_dim), - )]) - - for proj in self.input_proj: - nn.init.xavier_uniform_(proj[0].weight, gain=1) - nn.init.constant_(proj[0].bias, 0) - - self.transformer = MSDeformAttnTransformerEncoderOnly( - d_model=conv_dim, - dropout=transformer_dropout, - nhead=transformer_nheads, - dim_feedforward=transformer_dim_feedforward, - num_encoder_layers=transformer_enc_layers, - num_feature_levels=self.transformer_num_feature_levels, - ) - N_steps = conv_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - self.mask_dim = mask_dim - # use 1x1 conv instead - self.mask_features = Conv2d( - conv_dim, - mask_dim, - kernel_size=1, - stride=1, - padding=0, - ) - weight_init.c2_xavier_fill(self.mask_features) - - self.oneformer_num_feature_levels = 3 # always use 3 scales - self.common_stride = common_stride - - # extra fpn levels - stride = min(self.transformer_feature_strides) - self.num_fpn_levels = int(np.log2(stride) - np.log2(self.common_stride)) - - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(self.feature_channels[:self.num_fpn_levels]): - lateral_norm = get_norm(norm, conv_dim) - output_norm = get_norm(norm, conv_dim) - - lateral_conv = Conv2d( - in_channels, conv_dim, kernel_size=1, bias=use_bias, norm=lateral_norm - ) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - self.add_module("adapter_{}".format(idx + 1), lateral_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. 
- self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = {} - ret["input_shape"] = { - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - } - ret["conv_dim"] = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["norm"] = cfg.MODEL.SEM_SEG_HEAD.NORM - ret["transformer_dropout"] = cfg.MODEL.ONE_FORMER.DROPOUT - ret["transformer_nheads"] = cfg.MODEL.ONE_FORMER.NHEADS - # ret["transformer_dim_feedforward"] = cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD - ret["transformer_dim_feedforward"] = 1024 # use 1024 for deformable transformer encoder - ret[ - "transformer_enc_layers" - ] = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS # a separate config - ret["transformer_in_features"] = cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES - ret["common_stride"] = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE - return ret - - @autocast(enabled=False) - def forward_features(self, features): - srcs = [] - pos = [] - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.transformer_in_features[::-1]): - x = features[f].float() # deformable detr does not support half precision - srcs.append(self.input_proj[idx](x)) - pos.append(self.pe_layer(x)) - - y, spatial_shapes, level_start_index, valid_ratios = self.transformer(srcs, pos) - bs = y.shape[0] - - split_size_or_sections = [None] * self.transformer_num_feature_levels - for i in range(self.transformer_num_feature_levels): - if i < self.transformer_num_feature_levels - 1: - split_size_or_sections[i] = level_start_index[i + 1] - level_start_index[i] - else: - split_size_or_sections[i] = y.shape[1] - level_start_index[i] - y = torch.split(y, split_size_or_sections, dim=1) - - out = [] - multi_scale_features = [] - num_cur_levels = 0 - for i, z in enumerate(y): - out.append(z.transpose(1, 2).view(bs, -1, spatial_shapes[i][0], spatial_shapes[i][1])) - - # append `out` with extra FPN levels - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[:self.num_fpn_levels][::-1]): - x = features[f].float() - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(out[-1], size=cur_fpn.shape[-2:], mode="bilinear", align_corners=False) - y = output_conv(y) - out.append(y) - - for o in out: - if num_cur_levels < self.oneformer_num_feature_levels: - multi_scale_features.append(o) - num_cur_levels += 1 - - return self.mask_features(out[-1]), out[0], multi_scale_features, spatial_shapes, level_start_index diff --git a/spaces/cozyanduofen/bingo/src/components/chat-suggestions.tsx b/spaces/cozyanduofen/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ 
text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
    -
    - - { - currentSuggestions.map(suggestion => ( - - )) - } -
    -
    - ) : null -} diff --git a/spaces/crimeacs/phase-hunter/phasehunter/utils.py b/spaces/crimeacs/phase-hunter/phasehunter/utils.py deleted file mode 100644 index 62b9aaf5454b51cb9d616203719d12b1dd3e9874..0000000000000000000000000000000000000000 --- a/spaces/crimeacs/phase-hunter/phasehunter/utils.py +++ /dev/null @@ -1,93 +0,0 @@ -import numpy as np -from scipy.interpolate import griddata - - -def bin_distances(distances, bin_size=10): - # Bin the distances into groups of `bin_size` kilometers - binned_distances = {} - for i, distance in enumerate(distances): - bin_index = distance // bin_size - if bin_index not in binned_distances: - binned_distances[bin_index] = (distance, i) - elif i < binned_distances[bin_index][1]: - binned_distances[bin_index] = (distance, i) - - # Select the first distance in each bin and its index - first_distances = [] - for bin_index in binned_distances: - first_distance, first_distance_index = binned_distances[bin_index] - first_distances.append(first_distance_index) - - return first_distances - - -def interpolate_vel_model( - velocity_model, - initial_velocity, - lat_values, - lon_values, - depth_values, - n_lat, - n_lon, - n_depth, -): - # Create a mask for points with the initial velocity - initial_velocity_mask = velocity_model == initial_velocity - - # Find the indices of points with non-initial velocities - non_initial_velocity_indices = np.argwhere(~initial_velocity_mask) - - # Extract the coordinates and corresponding velocities of the known points - known_points = np.column_stack( - [ - lat_values[non_initial_velocity_indices[:, 0]], - lon_values[non_initial_velocity_indices[:, 1]], - depth_values[non_initial_velocity_indices[:, 2]], - ] - ) - - # Find the maximum depth in the known_points - max_known_depth = np.max(known_points[:, 2]) - - known_velocities = velocity_model[~initial_velocity_mask] - - # Create a grid of points for the entire volume - grid_points = ( - np.array(np.meshgrid(lat_values, lon_values, depth_values, indexing="ij")) - .reshape(3, -1) - .T - ) - - # Create a mask for grid points that are deeper than the maximum known depth - depth_mask = grid_points[:, 2] <= max_known_depth - - # Interpolate the velocities at the grid points - interpolated_velocities = griddata( - known_points, known_velocities, grid_points[depth_mask], method="linear" - ) - - # Fill nan values with the nearest known velocities - interpolated_velocities_filled = griddata( - known_points, known_velocities, grid_points[depth_mask], method="nearest" - ) - interpolated_velocities[ - np.isnan(interpolated_velocities) - ] = interpolated_velocities_filled[np.isnan(interpolated_velocities)] - - # Initialize an array with the same length as grid_points and fill it with nan values - interpolated_velocities_with_depth_limit = np.full(grid_points.shape[0], np.nan) - - # Update the array with the interpolated velocities for the masked grid points - interpolated_velocities_with_depth_limit[depth_mask] = interpolated_velocities - - # Reshape the interpolated velocities to match the shape of the velocity_model - interpolated_velocity_model = interpolated_velocities_with_depth_limit.reshape( - n_lat, n_lon, n_depth - ) - - return interpolated_velocity_model - - -# Function to find the closest index for a given value in an array -def find_closest_index(array, value): - return np.argmin(np.abs(array - value)) diff --git a/spaces/cvlab/zero123-live/ldm/util.py b/spaces/cvlab/zero123-live/ldm/util.py deleted file mode 100644 index 
d71f7d9643833ba5d74e04a3d8bb6d7fd4b32303..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/ldm/util.py +++ /dev/null @@ -1,275 +0,0 @@ -import importlib - -import torchvision -import torch -from torch import optim -import numpy as np - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - -import os -import numpy as np -import matplotlib.pyplot as plt -from PIL import Image -import torch -import time -import cv2 -from carvekit.api.high import HiInterface -import PIL - -def pil_rectangle_crop(im): - width, height = im.size # Get dimensions - - if width <= height: - left = 0 - right = width - top = (height - width)/2 - bottom = (height + width)/2 - else: - - top = 0 - bottom = height - left = (width - height) / 2 - bottom = (width + height) / 2 - - # Crop the center of the image - im = im.crop((left, top, right, bottom)) - return im - -def add_margin(pil_img, color, size=256): - width, height = pil_img.size - result = Image.new(pil_img.mode, (size, size), color) - result.paste(pil_img, ((size - width) // 2, (size - height) // 2)) - return result - - -def create_carvekit_interface(): - # Check doc strings for more information - interface = HiInterface(object_type="object", # Can be "object" or "hairs-like". - batch_size_seg=5, - batch_size_matting=1, - device='cuda' if torch.cuda.is_available() else 'cpu', - seg_mask_size=640, # Use 640 for Tracer B7 and 320 for U2Net - matting_mask_size=2048, - trimap_prob_threshold=231, - trimap_dilation=30, - trimap_erosion_iters=5, - fp16=False) - - return interface - - -def load_and_preprocess(interface, input_im): - ''' - :param input_im (PIL Image). - :return image (H, W, 3) array in [0, 1]. - ''' - # See https://github.com/Ir1d/image-background-remove-tool - image = input_im.convert('RGB') - - image_without_background = interface([image])[0] - image_without_background = np.array(image_without_background) - est_seg = image_without_background > 127 - image = np.array(image) - foreground = est_seg[:, : , -1].astype(np.bool_) - image[~foreground] = [255., 255., 255.] - x, y, w, h = cv2.boundingRect(foreground.astype(np.uint8)) - image = image[y:y+h, x:x+w, :] - image = PIL.Image.fromarray(np.array(image)) - - # resize image such that long edge is 512 - image.thumbnail([200, 200], Image.Resampling.LANCZOS) - image = add_margin(image, (255, 255, 255), size=256) - image = np.array(image) - - return image - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('data/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. 
Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x,torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -class AdamWwithEMAandWings(optim.Optimizer): - # credit to https://gist.github.com/crowsonkb/65f7265353f403714fce3b2595e0b298 - def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8, # TODO: check hyperparameters before using - weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999, # ema decay to match previous code - ema_power=1., param_names=()): - """AdamW that saves EMA versions of the parameters.""" - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - if not 0.0 <= ema_decay <= 1.0: - raise ValueError("Invalid ema_decay value: {}".format(ema_decay)) - defaults = dict(lr=lr, betas=betas, eps=eps, - weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay, - ema_power=ema_power, param_names=param_names) - super().__init__(params, defaults) - - def __setstate__(self, state): - super().__setstate__(state) - for group in self.param_groups: - group.setdefault('amsgrad', False) - - @torch.no_grad() - def step(self, closure=None): - """Performs a single optimization step. - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
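# A minimal sketch of how the instantiate_from_config helper above is typically
# used: the "target" key is a dotted import path resolved by get_obj_from_str,
# and "params" are forwarded as keyword arguments. The config values here are
# illustrative assumptions, not a real model config.
example_config = {
    "target": "torch.nn.Linear",
    "params": {"in_features": 16, "out_features": 4},
}
layer = instantiate_from_config(example_config)  # behaves like torch.nn.Linear(16, 4)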
- """ - loss = None - if closure is not None: - with torch.enable_grad(): - loss = closure() - - for group in self.param_groups: - params_with_grad = [] - grads = [] - exp_avgs = [] - exp_avg_sqs = [] - ema_params_with_grad = [] - state_sums = [] - max_exp_avg_sqs = [] - state_steps = [] - amsgrad = group['amsgrad'] - beta1, beta2 = group['betas'] - ema_decay = group['ema_decay'] - ema_power = group['ema_power'] - - for p in group['params']: - if p.grad is None: - continue - params_with_grad.append(p) - if p.grad.is_sparse: - raise RuntimeError('AdamW does not support sparse gradients') - grads.append(p.grad) - - state = self.state[p] - - # State initialization - if len(state) == 0: - state['step'] = 0 - # Exponential moving average of gradient values - state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of squared gradient values - state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. values - state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of parameter values - state['param_exp_avg'] = p.detach().float().clone() - - exp_avgs.append(state['exp_avg']) - exp_avg_sqs.append(state['exp_avg_sq']) - ema_params_with_grad.append(state['param_exp_avg']) - - if amsgrad: - max_exp_avg_sqs.append(state['max_exp_avg_sq']) - - # update the steps for each param group update - state['step'] += 1 - # record the step after step update - state_steps.append(state['step']) - - optim._functional.adamw(params_with_grad, - grads, - exp_avgs, - exp_avg_sqs, - max_exp_avg_sqs, - state_steps, - amsgrad=amsgrad, - beta1=beta1, - beta2=beta2, - lr=group['lr'], - weight_decay=group['weight_decay'], - eps=group['eps'], - maximize=False) - - cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power) - for param, ema_param in zip(params_with_grad, ema_params_with_grad): - ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay) - - return loss \ No newline at end of file diff --git a/spaces/dachenchen/HiWantJoin/run_macOS.command b/spaces/dachenchen/HiWantJoin/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/dachenchen/HiWantJoin/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! 
pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/base_model.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/base_model.py deleted file mode 100644 index cfe64a7f739ad8f8cfbf3073a2bf49e1468127fd..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/base_model.py +++ /dev/null @@ -1,316 +0,0 @@ -"""This script defines the base network model for Deep3DFaceRecon_pytorch -""" - -import os -import numpy as np -import torch -from collections import OrderedDict -from abc import ABC, abstractmethod -from . import networks - - -class BaseModel(ABC): - """This class is an abstract base class (ABC) for models. - To create a subclass, you need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate losses, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - """ - - def __init__(self, opt): - """Initialize the BaseModel class. - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - - When creating your custom class, you need to implement your own initialization. - In this fucntion, you should first call - Then, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): specify the images that you want to display and save. - -- self.visual_names (str list): define networks used in our training. - -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - """ - self.opt = opt - self.isTrain = False - self.device = torch.device('cpu') - self.save_dir = " " # os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir - self.loss_names = [] - self.model_names = [] - self.visual_names = [] - self.parallel_names = [] - self.optimizers = [] - self.image_paths = [] - self.metric = 0 # used for learning rate policy 'plateau' - - @staticmethod - def dict_grad_hook_factory(add_func=lambda x: x): - saved_dict = dict() - - def hook_gen(name): - def grad_hook(grad): - saved_vals = add_func(grad) - saved_dict[name] = saved_vals - return grad_hook - return hook_gen, saved_dict - - @staticmethod - def modify_commandline_options(parser, is_train): - """Add new model-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - return parser - - @abstractmethod - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input (dict): includes the data itself and its metadata information. 
- """ - pass - - @abstractmethod - def forward(self): - """Run forward pass; called by both functions and .""" - pass - - @abstractmethod - def optimize_parameters(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - pass - - def setup(self, opt): - """Load and print networks; create schedulers - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - if self.isTrain: - self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers] - - if not self.isTrain or opt.continue_train: - load_suffix = opt.epoch - self.load_networks(load_suffix) - - - # self.print_networks(opt.verbose) - - def parallelize(self, convert_sync_batchnorm=True): - if not self.opt.use_ddp: - for name in self.parallel_names: - if isinstance(name, str): - module = getattr(self, name) - setattr(self, name, module.to(self.device)) - else: - for name in self.model_names: - if isinstance(name, str): - module = getattr(self, name) - if convert_sync_batchnorm: - module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module) - setattr(self, name, torch.nn.parallel.DistributedDataParallel(module.to(self.device), - device_ids=[self.device.index], - find_unused_parameters=True, broadcast_buffers=True)) - - # DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient. - for name in self.parallel_names: - if isinstance(name, str) and name not in self.model_names: - module = getattr(self, name) - setattr(self, name, module.to(self.device)) - - # put state_dict of optimizer to gpu device - if self.opt.phase != 'test': - if self.opt.continue_train: - for optim in self.optimizers: - for state in optim.state.values(): - for k, v in state.items(): - if isinstance(v, torch.Tensor): - state[k] = v.to(self.device) - - def data_dependent_initialize(self, data): - pass - - def train(self): - """Make models train mode""" - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - net.train() - - def eval(self): - """Make models eval mode""" - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - net.eval() - - def test(self): - """Forward function used in test time. - - This function wraps function in no_grad() so we don't save intermediate steps for backprop - It also calls to produce additional visualization results - """ - with torch.no_grad(): - self.forward() - self.compute_visuals() - - def compute_visuals(self): - """Calculate additional output images for visdom and HTML visualization""" - pass - - def get_image_paths(self, name='A'): - """ Return image paths that are used to load current data""" - return self.image_paths if name =='A' else self.image_paths_B - - def update_learning_rate(self): - """Update learning rates for all the networks; called at the end of every epoch""" - for scheduler in self.schedulers: - if self.opt.lr_policy == 'plateau': - scheduler.step(self.metric) - else: - scheduler.step() - - lr = self.optimizers[0].param_groups[0]['lr'] - print('learning rate = %.7f' % lr) - - def get_current_visuals(self): - """Return visualization images. train.py will display these images with visdom, and save the images to a HTML""" - visual_ret = OrderedDict() - for name in self.visual_names: - if isinstance(name, str): - visual_ret[name] = getattr(self, name)[:, :3, ...] - return visual_ret - - def get_current_losses(self): - """Return traning losses / errors. 
train.py will print out these errors on console, and save them to a file""" - errors_ret = OrderedDict() - for name in self.loss_names: - if isinstance(name, str): - errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number - return errors_ret - - def save_networks(self, epoch): - """Save all the networks to the disk. - - Parameters: - epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name) - """ - if not os.path.isdir(self.save_dir): - os.makedirs(self.save_dir) - - save_filename = 'epoch_%s.pth' % (epoch) - save_path = os.path.join(self.save_dir, save_filename) - - save_dict = {} - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - if isinstance(net, torch.nn.DataParallel) or isinstance(net, - torch.nn.parallel.DistributedDataParallel): - net = net.module - save_dict[name] = net.state_dict() - - - for i, optim in enumerate(self.optimizers): - save_dict['opt_%02d'%i] = optim.state_dict() - - for i, sched in enumerate(self.schedulers): - save_dict['sched_%02d'%i] = sched.state_dict() - - torch.save(save_dict, save_path) - - def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0): - """Fix InstanceNorm checkpoints incompatibility (prior to 0.4)""" - key = keys[i] - if i + 1 == len(keys): # at the end, pointing to a parameter/buffer - if module.__class__.__name__.startswith('InstanceNorm') and \ - (key == 'running_mean' or key == 'running_var'): - if getattr(module, key) is None: - state_dict.pop('.'.join(keys)) - if module.__class__.__name__.startswith('InstanceNorm') and \ - (key == 'num_batches_tracked'): - state_dict.pop('.'.join(keys)) - else: - self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1) - - def load_networks(self, epoch): - """Load all the networks from the disk. 
- - Parameters: - epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name) - """ - if self.opt.isTrain and self.opt.pretrained_name is not None: - load_dir = os.path.join(self.opt.checkpoints_dir, self.opt.pretrained_name) - else: - load_dir = self.save_dir - load_filename = 'epoch_%s.pth' % (epoch) - load_path = os.path.join(load_dir, load_filename) - state_dict = torch.load(load_path, map_location=self.device) - print('loading the model from %s' % load_path) - - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - if isinstance(net, torch.nn.DataParallel): - net = net.module - net.load_state_dict(state_dict[name]) - - if self.opt.phase != 'test': - if self.opt.continue_train: - print('loading the optim from %s' % load_path) - for i, optim in enumerate(self.optimizers): - optim.load_state_dict(state_dict['opt_%02d'%i]) - - try: - print('loading the sched from %s' % load_path) - for i, sched in enumerate(self.schedulers): - sched.load_state_dict(state_dict['sched_%02d'%i]) - except: - print('Failed to load schedulers, set schedulers according to epoch count manually') - for i, sched in enumerate(self.schedulers): - sched.last_epoch = self.opt.epoch_count - 1 - - - - - def print_networks(self, verbose): - """Print the total number of parameters in the network and (if verbose) network architecture - - Parameters: - verbose (bool) -- if verbose: print the network architecture - """ - print('---------- Networks initialized -------------') - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - if verbose: - print(net) - print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6)) - print('-----------------------------------------------') - - def set_requires_grad(self, nets, requires_grad=False): - """Set requies_grad=Fasle for all the networks to avoid unnecessary computations - Parameters: - nets (network list) -- a list of networks - requires_grad (bool) -- whether the networks require gradients or not - """ - if not isinstance(nets, list): - nets = [nets] - for net in nets: - if net is not None: - for param in net.parameters(): - param.requires_grad = requires_grad - - def generate_visuals_for_evaluation(self, data, mode): - return {} diff --git a/spaces/danterivers/music-generation-samples/CHANGELOG.md b/spaces/danterivers/music-generation-samples/CHANGELOG.md deleted file mode 100644 index a685bcae80d0c64e64f5f51a9b9aa9245cec4b9e..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/CHANGELOG.md +++ /dev/null @@ -1,9 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.1a] - TBD - -Initial release, with model evaluation only. 
\ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/__init__.py deleted file mode 100644 index b394643122371557ae59a59a5391361a48172e3c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/__init__.py +++ /dev/null @@ -1,120 +0,0 @@ -from gradio.components.annotated_image import AnnotatedImage -from gradio.components.audio import Audio -from gradio.components.bar_plot import BarPlot -from gradio.components.base import ( - Column, - Component, - Form, - FormComponent, - IOComponent, - Row, - _Keywords, - component, - get_component_instance, -) -from gradio.components.button import Button -from gradio.components.carousel import Carousel -from gradio.components.chatbot import Chatbot -from gradio.components.checkbox import Checkbox -from gradio.components.checkboxgroup import CheckboxGroup -from gradio.components.clear_button import ClearButton -from gradio.components.code import Code -from gradio.components.color_picker import ColorPicker -from gradio.components.dataframe import Dataframe -from gradio.components.dataset import Dataset -from gradio.components.dropdown import Dropdown -from gradio.components.duplicate_button import DuplicateButton -from gradio.components.file import File -from gradio.components.gallery import Gallery -from gradio.components.highlighted_text import HighlightedText -from gradio.components.html import HTML -from gradio.components.image import Image -from gradio.components.interpretation import Interpretation -from gradio.components.json_component import JSON -from gradio.components.label import Label -from gradio.components.line_plot import LinePlot -from gradio.components.login_button import LoginButton -from gradio.components.logout_button import LogoutButton -from gradio.components.markdown import Markdown -from gradio.components.model3d import Model3D -from gradio.components.number import Number -from gradio.components.plot import Plot -from gradio.components.radio import Radio -from gradio.components.scatter_plot import ScatterPlot -from gradio.components.slider import Slider -from gradio.components.state import State, Variable -from gradio.components.status_tracker import StatusTracker -from gradio.components.textbox import Textbox -from gradio.components.timeseries import Timeseries -from gradio.components.upload_button import UploadButton -from gradio.components.video import Video - -Text = Textbox -DataFrame = Dataframe -Highlightedtext = HighlightedText -Annotatedimage = AnnotatedImage -Highlight = HighlightedText -Checkboxgroup = CheckboxGroup -TimeSeries = Timeseries -Json = JSON - -__all__ = [ - "Audio", - "BarPlot", - "Button", - "Carousel", - "Chatbot", - "ClearButton", - "Component", - "component", - "get_component_instance", - "_Keywords", - "Checkbox", - "CheckboxGroup", - "Code", - "ColorPicker", - "Column", - "Dataframe", - "DataFrame", - "Dataset", - "DuplicateButton", - "Form", - "FormComponent", - "Gallery", - "HTML", - "Image", - "IOComponent", - "Interpretation", - "JSON", - "Json", - "Label", - "LinePlot", - "LoginButton", - "LogoutButton", - "Markdown", - "Textbox", - "Dropdown", - "Model3D", - "File", - "HighlightedText", - "AnnotatedImage", - "CheckboxGroup", - "Timeseries", - "Text", - "Highlightedtext", - "Annotatedimage", - "Highlight", - "Checkboxgroup", - 
"TimeSeries", - "Number", - "Plot", - "Radio", - "Row", - "ScatterPlot", - "Slider", - "State", - "Variable", - "StatusTracker", - "UploadButton", - "Video", -] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/chatbot.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/chatbot.py deleted file mode 100644 index 605e040dfaac86f4b69670ed1f4d55132cbb3ac9..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/chatbot.py +++ /dev/null @@ -1,253 +0,0 @@ -"""gr.Chatbot() component.""" - -from __future__ import annotations - -import inspect -from pathlib import Path -from typing import Callable, Literal - -from gradio_client import utils as client_utils -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import JSONSerializable - -from gradio import utils -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import ( - Changeable, - EventListenerMethod, - Selectable, -) - -set_documentation_group("component") - - -@document() -class Chatbot(Changeable, Selectable, IOComponent, JSONSerializable): - """ - Displays a chatbot output showing both user submitted messages and responses. Supports a subset of Markdown including bold, italics, code, tables. Also supports audio/video/image files, which are displayed in the Chatbot, and other kinds of files which are displayed as links. - Preprocessing: passes the messages in the Chatbot as a {List[List[str | None | Tuple]]}, i.e. a list of lists. The inner list has 2 elements: the user message and the response message. See `Postprocessing` for the format of these messages. - Postprocessing: expects function to return a {List[List[str | None | Tuple]]}, i.e. a list of lists. The inner list should have 2 elements: the user message and the response message. The individual messages can be (1) strings in valid Markdown, (2) tuples if sending files: (a filepath or URL to a file, [optional string alt text]) -- if the file is image/video/audio, it is displayed in the Chatbot, or (3) None, in which case the message is not displayed. - - Demos: chatbot_simple, chatbot_multimodal - Guides: creating-a-chatbot - """ - - def __init__( - self, - value: list[list[str | tuple[str] | tuple[str | Path, str] | None]] - | Callable - | None = None, - color_map: dict[str, str] | None = None, - *, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - height: int | None = None, - latex_delimiters: list[dict[str, str | bool]] | None = None, - rtl: bool = False, - show_share_button: bool | None = None, - show_copy_button: bool = False, - **kwargs, - ): - """ - Parameters: - value: Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component. - color_map: This parameter is deprecated. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. 
to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - height: height of the component in pixels. - latex_delimiters: A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html). - rtl: If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right. - show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise. - show_copy_button: If True, will show a copy button for each chatbot message. - """ - if color_map is not None: - warn_deprecation("The 'color_map' parameter has been deprecated.") - self.select: EventListenerMethod - """ - Event listener for when the user selects message from Chatbot. - Uses event data gradio.SelectData to carry `value` referring to text of selected message, and `index` tuple to refer to [message, participant] index. - See EventData documentation on how to use this event data. 
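# A minimal usage sketch for the Chatbot component described above, assuming a
# Blocks app; the file path and messages are placeholders. Each history entry is
# a [user, assistant] pair, a (filepath, alt_text) tuple renders a media file,
# and None hides that slot. The select listener receives a gr.SelectData event
# whose .index is the [message, participant] pair and .value is the message text.
import gradio as gr

def on_select(evt: gr.SelectData):
    return f"Selected message {evt.index}: {evt.value}"

with gr.Blocks() as demo:
    history = [
        ["Show me the waveform", ("plots/waveform.png", "Example waveform")],
        ["Thanks!", None],
    ]
    chatbot = gr.Chatbot(value=history, height=400)
    picked = gr.Textbox(label="Selection")
    chatbot.select(on_select, None, picked)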
- """ - self.height = height - self.rtl = rtl - if latex_delimiters is None: - latex_delimiters = [{"left": "$$", "right": "$$", "display": True}] - self.latex_delimiters = latex_delimiters - self.show_share_button = ( - (utils.get_space() is not None) - if show_share_button is None - else show_share_button - ) - self.show_copy_button = show_copy_button - - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - return { - "value": self.value, - "latex_delimiters": self.latex_delimiters, - "selectable": self.selectable, - "height": self.height, - "show_share_button": self.show_share_button, - "rtl": self.rtl, - "show_copy_button": self.show_copy_button, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: list[list[str | tuple[str] | tuple[str, str] | None]] - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - height: int | None = None, - rtl: bool | None = None, - show_share_button: bool | None = None, - show_copy_button: bool | None = None, - ): - updated_config = { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "height": height, - "show_share_button": show_share_button, - "rtl": rtl, - "show_copy_button": show_copy_button, - "__type__": "update", - } - return updated_config - - def _preprocess_chat_messages( - self, chat_message: str | dict | None - ) -> str | tuple[str] | tuple[str, str] | None: - if chat_message is None: - return None - elif isinstance(chat_message, dict): - if chat_message["alt_text"] is not None: - return (chat_message["name"], chat_message["alt_text"]) - else: - return (chat_message["name"],) - else: # string - return chat_message - - def preprocess( - self, - y: list[list[str | dict | None] | tuple[str | dict | None, str | dict | None]], - ) -> list[list[str | tuple[str] | tuple[str, str] | None]]: - if y is None: - return y - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. 
Received: {message_pair}" - processed_messages.append( - [ - self._preprocess_chat_messages(message_pair[0]), - self._preprocess_chat_messages(message_pair[1]), - ] - ) - return processed_messages - - def _postprocess_chat_messages( - self, chat_message: str | tuple | list | None - ) -> str | dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - file_uri = str(chat_message[0]) - if utils.validate_url(file_uri): - filepath = file_uri - else: - filepath = self.make_temp_copy_if_needed(file_uri) - - mime_type = client_utils.get_mimetype(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - chat_message = inspect.cleandoc(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - - def postprocess( - self, - y: list[list[str | tuple[str] | tuple[str, str] | None] | tuple], - ) -> list[list[str | dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string or pathlib.Path filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}" - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0]), - self._postprocess_chat_messages(message_pair[1]), - ] - ) - return processed_messages - - def style(self, height: int | None = None, **kwargs): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if height is not None: - self.height = height - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/utils.py deleted file mode 100644 index 9b5f5a50eb6773c4085f8572a45b3fa351367565..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/utils.py +++ /dev/null @@ -1,755 +0,0 @@ -import enum -import json -import os -import re -import typing as t -from collections import abc -from collections import deque -from random import choice -from random import randrange -from threading import Lock -from types import CodeType -from urllib.parse import quote_from_bytes - -import markupsafe - -if t.TYPE_CHECKING: - import typing_extensions as te - -F = t.TypeVar("F", bound=t.Callable[..., t.Any]) - -# special singleton representing missing values for the runtime -missing: t.Any = type("MissingType", (), {"__repr__": lambda x: "missing"})() - -internal_code: t.MutableSet[CodeType] = set() - -concat = "".join - - -def pass_context(f: F) -> F: - """Pass the :class:`~jinja2.runtime.Context` as the first argument - to the decorated function when called while rendering a template. - - Can be used on functions, filters, and tests. - - If only ``Context.eval_context`` is needed, use - :func:`pass_eval_context`. If only ``Context.environment`` is - needed, use :func:`pass_environment`. - - .. versionadded:: 3.0.0 - Replaces ``contextfunction`` and ``contextfilter``. - """ - f.jinja_pass_arg = _PassArg.context # type: ignore - return f - - -def pass_eval_context(f: F) -> F: - """Pass the :class:`~jinja2.nodes.EvalContext` as the first argument - to the decorated function when called while rendering a template. - See :ref:`eval-context`. - - Can be used on functions, filters, and tests. - - If only ``EvalContext.environment`` is needed, use - :func:`pass_environment`. - - .. versionadded:: 3.0.0 - Replaces ``evalcontextfunction`` and ``evalcontextfilter``. - """ - f.jinja_pass_arg = _PassArg.eval_context # type: ignore - return f - - -def pass_environment(f: F) -> F: - """Pass the :class:`~jinja2.Environment` as the first argument to - the decorated function when called while rendering a template. - - Can be used on functions, filters, and tests. - - .. versionadded:: 3.0.0 - Replaces ``environmentfunction`` and ``environmentfilter``. - """ - f.jinja_pass_arg = _PassArg.environment # type: ignore - return f - - -class _PassArg(enum.Enum): - context = enum.auto() - eval_context = enum.auto() - environment = enum.auto() - - @classmethod - def from_obj(cls, obj: F) -> t.Optional["_PassArg"]: - if hasattr(obj, "jinja_pass_arg"): - return obj.jinja_pass_arg # type: ignore - - return None - - -def internalcode(f: F) -> F: - """Marks the function as internally used""" - internal_code.add(f.__code__) - return f - - -def is_undefined(obj: t.Any) -> bool: - """Check if the object passed is undefined. This does nothing more than - performing an instance check against :class:`Undefined` but looks nicer. - This can be used for custom filters or tests that want to react to - undefined variables. 
For example a custom default filter can look like - this:: - - def default(var, default=''): - if is_undefined(var): - return default - return var - """ - from .runtime import Undefined - - return isinstance(obj, Undefined) - - -def consume(iterable: t.Iterable[t.Any]) -> None: - """Consumes an iterable without doing anything with it.""" - for _ in iterable: - pass - - -def clear_caches() -> None: - """Jinja keeps internal caches for environments and lexers. These are - used so that Jinja doesn't have to recreate environments and lexers all - the time. Normally you don't have to care about that but if you are - measuring memory consumption you may want to clean the caches. - """ - from .environment import get_spontaneous_environment - from .lexer import _lexer_cache - - get_spontaneous_environment.cache_clear() - _lexer_cache.clear() - - -def import_string(import_name: str, silent: bool = False) -> t.Any: - """Imports an object based on a string. This is useful if you want to - use import paths as endpoints or something similar. An import path can - be specified either in dotted notation (``xml.sax.saxutils.escape``) - or with a colon as object delimiter (``xml.sax.saxutils:escape``). - - If the `silent` is True the return value will be `None` if the import - fails. - - :return: imported object - """ - try: - if ":" in import_name: - module, obj = import_name.split(":", 1) - elif "." in import_name: - module, _, obj = import_name.rpartition(".") - else: - return __import__(import_name) - return getattr(__import__(module, None, None, [obj]), obj) - except (ImportError, AttributeError): - if not silent: - raise - - -def open_if_exists(filename: str, mode: str = "rb") -> t.Optional[t.IO]: - """Returns a file descriptor for the filename if that file exists, - otherwise ``None``. - """ - if not os.path.isfile(filename): - return None - - return open(filename, mode) - - -def object_type_repr(obj: t.Any) -> str: - """Returns the name of the object's type. For some recognized - singletons the name of the object is returned instead. (For - example for `None` and `Ellipsis`). - """ - if obj is None: - return "None" - elif obj is Ellipsis: - return "Ellipsis" - - cls = type(obj) - - if cls.__module__ == "builtins": - return f"{cls.__name__} object" - - return f"{cls.__module__}.{cls.__name__} object" - - -def pformat(obj: t.Any) -> str: - """Format an object using :func:`pprint.pformat`.""" - from pprint import pformat # type: ignore - - return pformat(obj) - - -_http_re = re.compile( - r""" - ^ - ( - (https?://|www\.) # scheme or www - (([\w%-]+\.)+)? # subdomain - ( - [a-z]{2,63} # basic tld - | - xn--[\w%]{2,59} # idna tld - ) - | - ([\w%-]{2,63}\.)+ # basic domain - (com|net|int|edu|gov|org|info|mil) # basic tld - | - (https?://) # scheme - ( - (([\d]{1,3})(\.[\d]{1,3}){3}) # IPv4 - | - (\[([\da-f]{0,4}:){2}([\da-f]{0,4}:?){1,6}]) # IPv6 - ) - ) - (?::[\d]{1,5})? # port - (?:[/?#]\S*)? # path, query, and fragment - $ - """, - re.IGNORECASE | re.VERBOSE, -) -_email_re = re.compile(r"^\S+@\w[\w.-]*\.\w+$") - - -def urlize( - text: str, - trim_url_limit: t.Optional[int] = None, - rel: t.Optional[str] = None, - target: t.Optional[str] = None, - extra_schemes: t.Optional[t.Iterable[str]] = None, -) -> str: - """Convert URLs in text into clickable links. - - This may not recognize links in some situations. Usually, a more - comprehensive formatter, such as a Markdown library, is a better - choice. - - Works on ``http://``, ``https://``, ``www.``, ``mailto:``, and email - addresses. 
Links with trailing punctuation (periods, commas, closing - parentheses) and leading punctuation (opening parentheses) are - recognized excluding the punctuation. Email addresses that include - header fields are not recognized (for example, - ``mailto:address@example.com?cc=copy@example.com``). - - :param text: Original text containing URLs to link. - :param trim_url_limit: Shorten displayed URL values to this length. - :param target: Add the ``target`` attribute to links. - :param rel: Add the ``rel`` attribute to links. - :param extra_schemes: Recognize URLs that start with these schemes - in addition to the default behavior. - - .. versionchanged:: 3.0 - The ``extra_schemes`` parameter was added. - - .. versionchanged:: 3.0 - Generate ``https://`` links for URLs without a scheme. - - .. versionchanged:: 3.0 - The parsing rules were updated. Recognize email addresses with - or without the ``mailto:`` scheme. Validate IP addresses. Ignore - parentheses and brackets in more cases. - """ - if trim_url_limit is not None: - - def trim_url(x: str) -> str: - if len(x) > trim_url_limit: # type: ignore - return f"{x[:trim_url_limit]}..." - - return x - - else: - - def trim_url(x: str) -> str: - return x - - words = re.split(r"(\s+)", str(markupsafe.escape(text))) - rel_attr = f' rel="{markupsafe.escape(rel)}"' if rel else "" - target_attr = f' target="{markupsafe.escape(target)}"' if target else "" - - for i, word in enumerate(words): - head, middle, tail = "", word, "" - match = re.match(r"^([(<]|<)+", middle) - - if match: - head = match.group() - middle = middle[match.end() :] - - # Unlike lead, which is anchored to the start of the string, - # need to check that the string ends with any of the characters - # before trying to match all of them, to avoid backtracking. - if middle.endswith((")", ">", ".", ",", "\n", ">")): - match = re.search(r"([)>.,\n]|>)+$", middle) - - if match: - tail = match.group() - middle = middle[: match.start()] - - # Prefer balancing parentheses in URLs instead of ignoring a - # trailing character. 
- for start_char, end_char in ("(", ")"), ("<", ">"), ("<", ">"): - start_count = middle.count(start_char) - - if start_count <= middle.count(end_char): - # Balanced, or lighter on the left - continue - - # Move as many as possible from the tail to balance - for _ in range(min(start_count, tail.count(end_char))): - end_index = tail.index(end_char) + len(end_char) - # Move anything in the tail before the end char too - middle += tail[:end_index] - tail = tail[end_index:] - - if _http_re.match(middle): - if middle.startswith("https://") or middle.startswith("http://"): - middle = ( - f'{trim_url(middle)}' - ) - else: - middle = ( - f'' - f"{trim_url(middle)}" - ) - - elif middle.startswith("mailto:") and _email_re.match(middle[7:]): - middle = f'{middle[7:]}' - - elif ( - "@" in middle - and not middle.startswith("www.") - and ":" not in middle - and _email_re.match(middle) - ): - middle = f'{middle}' - - elif extra_schemes is not None: - for scheme in extra_schemes: - if middle != scheme and middle.startswith(scheme): - middle = f'{middle}' - - words[i] = f"{head}{middle}{tail}" - - return "".join(words) - - -def generate_lorem_ipsum( - n: int = 5, html: bool = True, min: int = 20, max: int = 100 -) -> str: - """Generate some lorem ipsum for the template.""" - from .constants import LOREM_IPSUM_WORDS - - words = LOREM_IPSUM_WORDS.split() - result = [] - - for _ in range(n): - next_capitalized = True - last_comma = last_fullstop = 0 - word = None - last = None - p = [] - - # each paragraph contains out of 20 to 100 words. - for idx, _ in enumerate(range(randrange(min, max))): - while True: - word = choice(words) - if word != last: - last = word - break - if next_capitalized: - word = word.capitalize() - next_capitalized = False - # add commas - if idx - randrange(3, 8) > last_comma: - last_comma = idx - last_fullstop += 2 - word += "," - # add end of sentences - if idx - randrange(10, 20) > last_fullstop: - last_comma = last_fullstop = idx - word += "." - next_capitalized = True - p.append(word) - - # ensure that the paragraph ends with a dot. - p_str = " ".join(p) - - if p_str.endswith(","): - p_str = p_str[:-1] + "." - elif not p_str.endswith("."): - p_str += "." - - result.append(p_str) - - if not html: - return "\n\n".join(result) - return markupsafe.Markup( - "\n".join(f"

<p>{markupsafe.escape(x)}</p>
    " for x in result) - ) - - -def url_quote(obj: t.Any, charset: str = "utf-8", for_qs: bool = False) -> str: - """Quote a string for use in a URL using the given charset. - - :param obj: String or bytes to quote. Other types are converted to - string then encoded to bytes using the given charset. - :param charset: Encode text to bytes using this charset. - :param for_qs: Quote "/" and use "+" for spaces. - """ - if not isinstance(obj, bytes): - if not isinstance(obj, str): - obj = str(obj) - - obj = obj.encode(charset) - - safe = b"" if for_qs else b"/" - rv = quote_from_bytes(obj, safe) - - if for_qs: - rv = rv.replace("%20", "+") - - return rv - - -@abc.MutableMapping.register -class LRUCache: - """A simple LRU Cache implementation.""" - - # this is fast for small capacities (something below 1000) but doesn't - # scale. But as long as it's only used as storage for templates this - # won't do any harm. - - def __init__(self, capacity: int) -> None: - self.capacity = capacity - self._mapping: t.Dict[t.Any, t.Any] = {} - self._queue: "te.Deque[t.Any]" = deque() - self._postinit() - - def _postinit(self) -> None: - # alias all queue methods for faster lookup - self._popleft = self._queue.popleft - self._pop = self._queue.pop - self._remove = self._queue.remove - self._wlock = Lock() - self._append = self._queue.append - - def __getstate__(self) -> t.Mapping[str, t.Any]: - return { - "capacity": self.capacity, - "_mapping": self._mapping, - "_queue": self._queue, - } - - def __setstate__(self, d: t.Mapping[str, t.Any]) -> None: - self.__dict__.update(d) - self._postinit() - - def __getnewargs__(self) -> t.Tuple: - return (self.capacity,) - - def copy(self) -> "LRUCache": - """Return a shallow copy of the instance.""" - rv = self.__class__(self.capacity) - rv._mapping.update(self._mapping) - rv._queue.extend(self._queue) - return rv - - def get(self, key: t.Any, default: t.Any = None) -> t.Any: - """Return an item from the cache dict or `default`""" - try: - return self[key] - except KeyError: - return default - - def setdefault(self, key: t.Any, default: t.Any = None) -> t.Any: - """Set `default` if the key is not in the cache otherwise - leave unchanged. Return the value of this key. - """ - try: - return self[key] - except KeyError: - self[key] = default - return default - - def clear(self) -> None: - """Clear the cache.""" - with self._wlock: - self._mapping.clear() - self._queue.clear() - - def __contains__(self, key: t.Any) -> bool: - """Check if a key exists in this cache.""" - return key in self._mapping - - def __len__(self) -> int: - """Return the current size of the cache.""" - return len(self._mapping) - - def __repr__(self) -> str: - return f"<{type(self).__name__} {self._mapping!r}>" - - def __getitem__(self, key: t.Any) -> t.Any: - """Get an item from the cache. Moves the item up so that it has the - highest priority then. - - Raise a `KeyError` if it does not exist. - """ - with self._wlock: - rv = self._mapping[key] - - if self._queue[-1] != key: - try: - self._remove(key) - except ValueError: - # if something removed the key from the container - # when we read, ignore the ValueError that we would - # get otherwise. - pass - - self._append(key) - - return rv - - def __setitem__(self, key: t.Any, value: t.Any) -> None: - """Sets the value for an item. Moves the item up so that it - has the highest priority then. 
- """ - with self._wlock: - if key in self._mapping: - self._remove(key) - elif len(self._mapping) == self.capacity: - del self._mapping[self._popleft()] - - self._append(key) - self._mapping[key] = value - - def __delitem__(self, key: t.Any) -> None: - """Remove an item from the cache dict. - Raise a `KeyError` if it does not exist. - """ - with self._wlock: - del self._mapping[key] - - try: - self._remove(key) - except ValueError: - pass - - def items(self) -> t.Iterable[t.Tuple[t.Any, t.Any]]: - """Return a list of items.""" - result = [(key, self._mapping[key]) for key in list(self._queue)] - result.reverse() - return result - - def values(self) -> t.Iterable[t.Any]: - """Return a list of all values.""" - return [x[1] for x in self.items()] - - def keys(self) -> t.Iterable[t.Any]: - """Return a list of all keys ordered by most recent usage.""" - return list(self) - - def __iter__(self) -> t.Iterator[t.Any]: - return reversed(tuple(self._queue)) - - def __reversed__(self) -> t.Iterator[t.Any]: - """Iterate over the keys in the cache dict, oldest items - coming first. - """ - return iter(tuple(self._queue)) - - __copy__ = copy - - -def select_autoescape( - enabled_extensions: t.Collection[str] = ("html", "htm", "xml"), - disabled_extensions: t.Collection[str] = (), - default_for_string: bool = True, - default: bool = False, -) -> t.Callable[[t.Optional[str]], bool]: - """Intelligently sets the initial value of autoescaping based on the - filename of the template. This is the recommended way to configure - autoescaping if you do not want to write a custom function yourself. - - If you want to enable it for all templates created from strings or - for all templates with `.html` and `.xml` extensions:: - - from jinja2 import Environment, select_autoescape - env = Environment(autoescape=select_autoescape( - enabled_extensions=('html', 'xml'), - default_for_string=True, - )) - - Example configuration to turn it on at all times except if the template - ends with `.txt`:: - - from jinja2 import Environment, select_autoescape - env = Environment(autoescape=select_autoescape( - disabled_extensions=('txt',), - default_for_string=True, - default=True, - )) - - The `enabled_extensions` is an iterable of all the extensions that - autoescaping should be enabled for. Likewise `disabled_extensions` is - a list of all templates it should be disabled for. If a template is - loaded from a string then the default from `default_for_string` is used. - If nothing matches then the initial value of autoescaping is set to the - value of `default`. - - For security reasons this function operates case insensitive. - - .. versionadded:: 2.9 - """ - enabled_patterns = tuple(f".{x.lstrip('.').lower()}" for x in enabled_extensions) - disabled_patterns = tuple(f".{x.lstrip('.').lower()}" for x in disabled_extensions) - - def autoescape(template_name: t.Optional[str]) -> bool: - if template_name is None: - return default_for_string - template_name = template_name.lower() - if template_name.endswith(enabled_patterns): - return True - if template_name.endswith(disabled_patterns): - return False - return default - - return autoescape - - -def htmlsafe_json_dumps( - obj: t.Any, dumps: t.Optional[t.Callable[..., str]] = None, **kwargs: t.Any -) -> markupsafe.Markup: - """Serialize an object to a string of JSON with :func:`json.dumps`, - then replace HTML-unsafe characters with Unicode escapes and mark - the result safe with :class:`~markupsafe.Markup`. - - This is available in templates as the ``|tojson`` filter. 
- - The following characters are escaped: ``<``, ``>``, ``&``, ``'``. - - The returned string is safe to render in HTML documents and - `` - - - - - - - - - - - - -
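# A small behavioral sketch of the LRUCache defined in the jinja2 utils module
# above: setting a key once capacity is reached evicts the least recently used
# entry, and reads refresh recency. Keys and values here are made up for
# illustration.
from jinja2.utils import LRUCache

cache = LRUCache(2)
cache["a"] = 1
cache["b"] = 2
_ = cache["a"]   # touch "a" so it becomes the most recently used key
cache["c"] = 3   # capacity reached, so the least recently used key "b" is evicted
assert "a" in cache and "c" in cache and "b" not in cache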
    - - - diff --git a/spaces/hyoo/translate/tokenization_small100.py b/spaces/hyoo/translate/tokenization_small100.py deleted file mode 100644 index e3b71e1f12c1c98bf545b60025c40ccf9ff76955..0000000000000000000000000000000000000000 --- a/spaces/hyoo/translate/tokenization_small100.py +++ /dev/null @@ -1,364 +0,0 @@ -# Copyright (c) 2022 Idiap Research Institute, http://www.idiap.ch/ -# Written by Alireza Mohammadshahi -# This is a modified version of https://github.com/huggingface/transformers/blob/main/src/transformers/models/m2m_100/tokenization_m2m_100.py -# which owns by Fariseq Authors and The HuggingFace Inc. team. -# -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes for SMALL100.""" -import json -import os -from pathlib import Path -from shutil import copyfile -from typing import Any, Dict, List, Optional, Tuple, Union - -import sentencepiece - -from transformers.tokenization_utils import BatchEncoding, PreTrainedTokenizer -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - -SPIECE_UNDERLINE = "▁" - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "spm_file": "sentencepiece.bpe.model", - "tokenizer_config_file": "tokenizer_config.json", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/vocab.json", - }, - "spm_file": { - "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/sentencepiece.bpe.model", - }, - "tokenizer_config_file": { - "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/tokenizer_config.json", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "alirezamsh/small100": 1024, -} - -# fmt: off -FAIRSEQ_LANGUAGE_CODES = { - "m2m100": ["af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"] -} -# fmt: on - - -class SMALL100Tokenizer(PreTrainedTokenizer): - """ - Construct an SMALL100 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece). - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - Args: - vocab_file (`str`): - Path to the vocabulary file. - spm_file (`str`): - Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that - contains the vocabulary. - tgt_lang (`str`, *optional*): - A string representing the target language. 
- eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - language_codes (`str`, *optional*): - What language codes to use. Should be `"m2m100"`. - sp_model_kwargs (`dict`, *optional*): - Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for - SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, - to set: - - `enable_sampling`: Enable subword regularization. - - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - - `nbest_size = {0,1}`: No sampling is performed. - - `nbest_size > 1`: samples from the nbest_size results. - - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) - using forward-filtering-and-backward-sampling algorithm. - - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for - BPE-dropout. - Examples: - ```python - >>> from tokenization_small100 import SMALL100Tokenizer - >>> tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="ro") - >>> src_text = " UN Chief Says There Is No Military Solution in Syria" - >>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" - >>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") - >>> model(**model_inputs) # should work - ```""" - - vocab_files_names = VOCAB_FILES_NAMES - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - model_input_names = ["input_ids", "attention_mask"] - - prefix_tokens: List[int] = [] - suffix_tokens: List[int] = [] - - def __init__( - self, - vocab_file, - spm_file, - tgt_lang=None, - bos_token="", - eos_token="", - sep_token="", - pad_token="", - unk_token="", - language_codes="m2m100", - sp_model_kwargs: Optional[Dict[str, Any]] = None, - num_madeup_words=8, - **kwargs, - ) -> None: - self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs - - self.language_codes = language_codes - fairseq_language_code = FAIRSEQ_LANGUAGE_CODES[language_codes] - self.lang_code_to_token = {lang_code: f"__{lang_code}__" for lang_code in fairseq_language_code} - - kwargs["additional_special_tokens"] = kwargs.get("additional_special_tokens", []) - kwargs["additional_special_tokens"] += [ - self.get_lang_token(lang_code) - for lang_code in fairseq_language_code - if self.get_lang_token(lang_code) not in kwargs["additional_special_tokens"] - ] - - super().__init__( - tgt_lang=tgt_lang, - bos_token=bos_token, - eos_token=eos_token, - sep_token=sep_token, - unk_token=unk_token, - pad_token=pad_token, - language_codes=language_codes, - sp_model_kwargs=self.sp_model_kwargs, - num_madeup_words=num_madeup_words, - **kwargs, - ) - - self.vocab_file = vocab_file - self.encoder = load_json(vocab_file) - self.decoder = {v: k for k, v in 
self.encoder.items()} - self.spm_file = spm_file - self.sp_model = load_spm(spm_file, self.sp_model_kwargs) - - self.encoder_size = len(self.encoder) - - self.lang_token_to_id = { - self.get_lang_token(lang_code): self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code) - } - self.lang_code_to_id = {lang_code: self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)} - self.id_to_lang_token = {v: k for k, v in self.lang_token_to_id.items()} - - self._tgt_lang = tgt_lang if tgt_lang is not None else "en" - self.cur_lang_id = self.get_lang_id(self._tgt_lang) - self.set_lang_special_tokens(self._tgt_lang) - - self.num_madeup_words = num_madeup_words - - @property - def vocab_size(self) -> int: - return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words - - @property - def tgt_lang(self) -> str: - return self._tgt_lang - - @tgt_lang.setter - def tgt_lang(self, new_tgt_lang: str) -> None: - self._tgt_lang = new_tgt_lang - self.set_lang_special_tokens(self._tgt_lang) - - def _tokenize(self, text: str) -> List[str]: - return self.sp_model.encode(text, out_type=str) - - def _convert_token_to_id(self, token): - if token in self.lang_token_to_id: - return self.lang_token_to_id[token] - return self.encoder.get(token, self.encoder[self.unk_token]) - - def _convert_id_to_token(self, index: int) -> str: - """Converts an index (integer) in a token (str) using the decoder.""" - if index in self.id_to_lang_token: - return self.id_to_lang_token[index] - return self.decoder.get(index, self.unk_token) - - def convert_tokens_to_string(self, tokens: List[str]) -> str: - """Converts a sequence of tokens (strings for sub-words) in a single string.""" - return self.sp_model.decode(tokens) - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - prefix_ones = [1] * len(self.prefix_tokens) - suffix_ones = [1] * len(self.suffix_tokens) - if token_ids_1 is None: - return prefix_ones + ([0] * len(token_ids_0)) + suffix_ones - return prefix_ones + ([0] * len(token_ids_0)) + ([0] * len(token_ids_1)) + suffix_ones - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. An MBART sequence has the following format, where `X` represents the sequence: - - `input_ids` (for encoder) `X [eos, src_lang_code]` - - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]` - BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a - separator. 
- Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - if token_ids_1 is None: - if self.prefix_tokens is None: - return token_ids_0 + self.suffix_tokens - else: - return self.prefix_tokens + token_ids_0 + self.suffix_tokens - # We don't expect to process pairs, but leave the pair logic for API consistency - if self.prefix_tokens is None: - return token_ids_0 + token_ids_1 + self.suffix_tokens - else: - return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens - - def get_vocab(self) -> Dict: - vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} - vocab.update(self.added_tokens_encoder) - return vocab - - def __getstate__(self) -> Dict: - state = self.__dict__.copy() - state["sp_model"] = None - return state - - def __setstate__(self, d: Dict) -> None: - self.__dict__ = d - - # for backward compatibility - if not hasattr(self, "sp_model_kwargs"): - self.sp_model_kwargs = {} - - self.sp_model = load_spm(self.spm_file, self.sp_model_kwargs) - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - save_dir = Path(save_directory) - if not save_dir.is_dir(): - raise OSError(f"{save_directory} should be a directory") - vocab_save_path = save_dir / ( - (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["vocab_file"] - ) - spm_save_path = save_dir / ( - (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["spm_file"] - ) - - save_json(self.encoder, vocab_save_path) - - if os.path.abspath(self.spm_file) != os.path.abspath(spm_save_path) and os.path.isfile(self.spm_file): - copyfile(self.spm_file, spm_save_path) - elif not os.path.isfile(self.spm_file): - with open(spm_save_path, "wb") as fi: - content_spiece_model = self.sp_model.serialized_model_proto() - fi.write(content_spiece_model) - - return (str(vocab_save_path), str(spm_save_path)) - - def prepare_seq2seq_batch( - self, - src_texts: List[str], - tgt_texts: Optional[List[str]] = None, - tgt_lang: str = "ro", - **kwargs, - ) -> BatchEncoding: - self.tgt_lang = tgt_lang - self.set_lang_special_tokens(self.tgt_lang) - return super().prepare_seq2seq_batch(src_texts, tgt_texts, **kwargs) - - def _build_translation_inputs(self, raw_inputs, tgt_lang: Optional[str], **extra_kwargs): - """Used by translation pipeline, to prepare inputs for the generate function""" - if tgt_lang is None: - raise ValueError("Translation requires a `tgt_lang` for this model") - self.tgt_lang = tgt_lang - inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs) - return inputs - - def _switch_to_input_mode(self): - self.set_lang_special_tokens(self.tgt_lang) - - def _switch_to_target_mode(self): - self.prefix_tokens = None - self.suffix_tokens = [self.eos_token_id] - - def set_lang_special_tokens(self, src_lang: str) -> None: - """Reset the special tokens to the tgt lang setting. 
No prefix and suffix=[eos, tgt_lang_code].""" - lang_token = self.get_lang_token(src_lang) - self.cur_lang_id = self.lang_token_to_id[lang_token] - self.prefix_tokens = [self.cur_lang_id] - self.suffix_tokens = [self.eos_token_id] - - def get_lang_token(self, lang: str) -> str: - return self.lang_code_to_token[lang] - - def get_lang_id(self, lang: str) -> int: - lang_token = self.get_lang_token(lang) - return self.lang_token_to_id[lang_token] - - -def load_spm(path: str, sp_model_kwargs: Dict[str, Any]) -> sentencepiece.SentencePieceProcessor: - spm = sentencepiece.SentencePieceProcessor(**sp_model_kwargs) - spm.Load(str(path)) - return spm - - -def load_json(path: str) -> Union[Dict, List]: - with open(path, "r") as f: - return json.load(f) - - -def save_json(data, path: str) -> None: - with open(path, "w") as f: - json.dump(data, f, indent=2) diff --git a/spaces/imseldrith/FaceSwap/roop/globals.py b/spaces/imseldrith/FaceSwap/roop/globals.py deleted file mode 100644 index 3eca8d0d024db967cc6d7e7149f68f65f84d7072..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/FaceSwap/roop/globals.py +++ /dev/null @@ -1,22 +0,0 @@ -from typing import List, Optional - -source_path: Optional[str] = None -target_path: Optional[str] = None -output_path: Optional[str] = None -headless: Optional[bool] = None -frame_processors: List[str] = [] -keep_fps: Optional[bool] = None -keep_frames: Optional[bool] = None -skip_audio: Optional[bool] = None -many_faces: Optional[bool] = None -reference_face_position: Optional[int] = None -reference_frame_number: Optional[int] = None -similar_face_distance: Optional[float] = None -temp_frame_format: Optional[str] = None -temp_frame_quality: Optional[int] = None -output_video_encoder: Optional[str] = None -output_video_quality: Optional[int] = None -max_memory: Optional[int] = None -execution_providers: List[str] = [] -execution_threads: Optional[int] = None -log_level: str = 'error' diff --git a/spaces/inamXcontru/PoeticTTS/Bus Simulator 2009 English Patch.rar Checked.md b/spaces/inamXcontru/PoeticTTS/Bus Simulator 2009 English Patch.rar Checked.md deleted file mode 100644 index 97ef8d2c47fcd42347dc92bc8ffd015cc090cdf3..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bus Simulator 2009 English Patch.rar Checked.md +++ /dev/null @@ -1,6 +0,0 @@ -

    bus simulator 2009 english patch.rar checked


    Download ✒ ✒ ✒ https://gohhs.com/2uz5Es



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Autodesk Structural Detailing 2014 Serial Number.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Autodesk Structural Detailing 2014 Serial Number.md deleted file mode 100644 index 016344433cc1491d36c124d56532855f4d3a4616..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Autodesk Structural Detailing 2014 Serial Number.md +++ /dev/null @@ -1,6 +0,0 @@ -

    autodesk structural detailing 2014 serial number


    DOWNLOADhttps://urlin.us/2uEyf8



    - -April 10, 2016 - AutoCAD Revit Structure Suite 2014. 256F1. AutoCAD Structural Detailing 2014. 587F1. Autodesk 3ds Max 2014. 128F1. // Supported products //. Autodesk Revit Structure Suite 2014. // Supported Products //. AutoCAD Plant 3D 2014. 1B27. Autodesk Vault Workgroup 2013.6F3. Autodesk Vault Workgroup 2013.6F3. AutoCAD Plant 3D 2013. 9F1. // Supported products //. Autodesk Vault Workgroup 2013.9F3. Autodesk Vault Workgroup 2013.9F3. AutoCAD LT 2014.9F1. AutoCAD LT 2014.9F1. Autodesk Vault Workgroup 2013.9F3. // Supported products //. AutoCAD LT 2013.9F1. AutoCAD LT 2013.9F1. Autodesk Vault Workgroup 2013. 9F3. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Carmes-gif-extractor-download LINK Mega.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Carmes-gif-extractor-download LINK Mega.md deleted file mode 100644 index 90852622d5782bc4f837ab4c4c00691b9b57feec..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Carmes-gif-extractor-download LINK Mega.md +++ /dev/null @@ -1,6 +0,0 @@ -

    carmes-gif-extractor-download mega


    Download ☆☆☆☆☆ https://urlin.us/2uEwKN



    - -10 baise of ass chick shaved carmen savoy rv omegle scene horny 830 milf ... on hot cock shy of part. mega out lick worked deep flip ass girl lv amai again her cute ... at deep culazo teen girls to maker awesom adams gangbang. chunky facials ... latina. brunette two cum hazel download and coolbudy extended with natural. 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Femeia Sarpe Film Indian Subtitrat In Romana.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Femeia Sarpe Film Indian Subtitrat In Romana.md deleted file mode 100644 index f3166e34482d0260091b9c901b5316b14476414c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Femeia Sarpe Film Indian Subtitrat In Romana.md +++ /dev/null @@ -1,6 +0,0 @@ -

    femeia sarpe film indian subtitrat in romana


    Download ••• https://urlin.us/2uEyzW



    -
    -부천오피 ce nume sa-i dau acestei iubiri ep 247 retain naagin femeia sarpe ... ep 133 question femeia sarpe film indian online subtitrat bid prizoniera dragostei ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ford ECAT Torrent Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ford ECAT Torrent Download.md deleted file mode 100644 index 29f540794711b09803b8b8548fea864b767b4c44..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ford ECAT Torrent Download.md +++ /dev/null @@ -1,148 +0,0 @@ -
    -

    What is Ford ECAT and How to Download it from Torrent Sites

    - -

    Ford ECAT is an electronic parts catalog that contains information about all Ford vehicles produced for the European market. It is a useful tool for anyone who owns, repairs, or sells Ford cars, jeeps, or light commercial vehicles. With Ford ECAT, you can easily identify and order parts and services for your Ford vehicle, as well as access advanced search and content functions.

    -

    ford ECAT torrent download


    Downloadhttps://urlin.us/2uExRS



    - -

    However, Ford ECAT is not a free software. You need to pay a subscription fee or a license fee to use it. If you want to save money and get Ford ECAT for free, you can download it from torrent sites. Torrent sites are online platforms that allow users to share files with each other using a peer-to-peer network. By downloading a torrent file, you can get the full version of Ford ECAT without paying any fees or subscriptions.

    - -

    How to Find Ford ECAT Torrent Sites

    - -

    To download Ford ECAT from torrent sites, you need to find a reputable torrent site that offers it. You can use a search engine or a specialized torrent tracker to find one. Some examples of popular torrent sites are MHH AUTO, Bitbucket, and tuganetwork.

    - -

    When you find a torrent site that offers Ford ECAT, you need to check the following things:

    - -
      -
    • The version of Ford ECAT. You should choose the latest version available, such as Ford ECAT 10.2015 or Ford ECAT 06.2014.
    • -
    • The size of the file. You should choose a file that is not too large or too small. A typical file size for Ford ECAT is around 7 GB.
    • -
    • The number of seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. You should choose a file that has more seeders than leechers, as this will increase your download speed and quality.
    • -
    • The comments and ratings. You should read the comments and ratings of other users who have downloaded the file. This will help you avoid fake or corrupted files, as well as get tips and feedback on how to install and activate Ford ECAT.
    • -
    - -

    How to Download Ford ECAT Torrent File

    - -

    After you find a suitable torrent site and file for Ford ECAT, you need to download the torrent file. The torrent file is usually a small file that contains information about the larger file you want to download. Here are the steps you need to follow:

    - -
      -
    1. Download the torrent file of Ford ECAT from the torrent site. You can do this by clicking on the download button or link on the site.
    2. -
    3. Open the torrent file with a torrent client. A torrent client is a software that enables you to download files from other users who have them on their computers. Some examples of popular torrent clients are uTorrent, BitTorrent, and qBittorrent.
    4. -
    5. Start downloading Ford ECAT from the torrent client. The download speed will depend on the number of seeders and leechers available, as well as your internet connection speed.
    6. -
    7. Wait until the download is finished. The download time will vary depending on the size of the file and your internet connection speed. Once the download is complete, you will have the full version of Ford ECAT on your computer.
    8. -
    - -

    How to Install and Activate Ford ECAT Torrent File

    - -

    After downloading Ford ECAT torrent file, you need to install and activate it on your computer. Here are the steps you need to follow:

    -

    - -
      -
    1. Extract the downloaded file using a software like WinRAR or 7-Zip. You will get a folder containing the installation files of Ford ECAT.
    2. -
    3. Run the setup.exe file and follow the instructions on the screen. You will need to choose a destination folder for Ford ECAT and agree to the terms and conditions.
    4. -
    5. Copy the patch files from the downloaded folder to the installation folder of Ford ECAT. The patch files are usually MG16.dll and Microcat.exe. They are used to bypass the activation process of Ford ECAT.
    6. -
    7. Change your system date to June 2014 or October 2015, depending on the version of Ford ECAT you downloaded. You can do this by going to your Control Panel > Date and Time > Change date and time settings.
    8. -
    9. Run Ford ECAT from your desktop or start menu. You will be able to access all the features and functions of Ford ECAT without any limitations.
    10. -
    - -

    The Benefits of Downloading Ford ECAT Torrent File

    - -

    Downloading Ford ECAT torrent file has many benefits for you as a Ford car owner or enthusiast. Here are some of them:

    - -
      -
    • You can save money by getting Ford ECAT for free instead of paying for a subscription or a license.
    • -
    • You can get access to the most updated and comprehensive information about Ford parts and services for all European models.
    • -
    • You can improve your efficiency and productivity by using Ford ECAT's advanced search and content functions.
    • -
    • You can connect to other systems of Ford and get more support and assistance.
    • -
    • You can enhance your skills and knowledge by learning from Ford ECAT's detailed diagrams and descriptions.
    • -
    - -

    The Risks of Downloading Ford ECAT Torrent File

    - -

    However, downloading Ford ECAT torrent file also has some risks that you should be aware of. Here are some of them:

    - -
      -
    • You may encounter viruses, malware, or other harmful programs that can damage your computer or steal your personal information.
    • -
    • You may face legal issues or penalties if you violate the copyright or license agreement of Ford ECAT.
    • -
    • You may get low-quality or outdated files that may not work properly or contain errors.
    • -
    • You may have difficulty installing or activating Ford ECAT if you do not follow the instructions correctly or if you do not have the required patch files.
    • -
    - -

    To avoid these risks, you should always use a trusted torrent site, a reliable torrent client, an antivirus software, and a VPN service to protect yourself and your computer.

    - -


    -

    How to Use Ford ECAT Torrent File

    - -

    Once you have installed and activated Ford ECAT on your computer, you can start using it to find and order parts and services for your Ford vehicle. Here are some tips on how to use Ford ECAT effectively:

    - -
      -
    • Use the search function to find the part or service you need. You can search by part number, part name, vehicle model, vehicle identification number (VIN), or vehicle registration number (VRN).
    • -
    • Use the content function to view detailed information about the part or service you selected. You can view diagrams, descriptions, specifications, prices, availability, compatibility, and related parts or services.
    • -
    • Use the order function to place an order for the part or service you selected. You can choose the delivery method, payment method, and delivery address. You can also track the status of your order and receive notifications.
    • -
    • Use the help function to get assistance or support from Ford ECAT. You can access user manuals, tutorials, FAQs, contact details, and feedback forms.
    • -
    - -

    The Alternatives to Ford ECAT Torrent File

    - -

    If you are not comfortable with downloading Ford ECAT from torrent sites, or if you want to try other options, you can also get Ford ECAT from other sources. Here are some alternatives to Ford ECAT torrent file:

    - -
      -
    • You can buy Ford ECAT from an authorized dealer or distributor. You can find a list of dealers and distributors on the official website of Ford. You will need to pay a subscription fee or a license fee to use Ford ECAT.
    • -
    • You can use Microcat Ford Europe, which is another electronic parts catalog for Ford European vehicles. It has similar features and functions as Ford ECAT, but it is not updated as frequently. You can download Microcat Ford Europe from MOTORCARSOFT.COM.
    • -
    • You can use other online platforms that offer information about Ford parts and services, such as AutoZone, RockAuto, or PartsGeek. However, these platforms may not have as comprehensive and accurate data as Ford ECAT.
    • -
    - -


    -

    The Advantages of Ford ECAT over Other Parts Catalogs

    - -

    Ford ECAT is not the only parts catalog available for Ford vehicles, but it is one of the best ones. Here are some advantages of Ford ECAT over other parts catalogs:

    - -
      -
    • Ford ECAT is updated regularly and contains the latest information about Ford parts and services. Other parts catalogs may be outdated or incomplete.
    • -
    • Ford ECAT covers all models of Ford vehicles produced for the European market. Other parts catalogs may not include some models or regions.
    • -
    • Ford ECAT has a user-friendly interface and a powerful search engine. Other parts catalogs may have a complex or confusing interface or a slow or inaccurate search engine.
    • -
    • Ford ECAT provides detailed diagrams and descriptions of each part and service. Other parts catalogs may only provide basic information or images.
    • -
    • Ford ECAT connects to other systems of Ford and allows you to order parts and services online. Other parts catalogs may not have this feature or may require additional software or hardware.
    • -
    - -

    The FAQs about Ford ECAT Torrent Download

    - -

    If you have any questions or doubts about Ford ECAT torrent download, you may find the answers in this section. Here are some frequently asked questions and their answers:

    - -
    -
    Is Ford ECAT torrent download legal?
    -
    Downloading Ford ECAT from torrent sites may violate the copyright or license agreement of Ford ECAT. Therefore, it is not legal and you may face legal consequences if you do so. However, the chances of getting caught or sued are low, as long as you use a VPN service and do not share the file with others.
    -
    Is Ford ECAT torrent download safe?
    -
    Downloading Ford ECAT from torrent sites may expose your computer to viruses, malware, or other harmful programs that can damage your computer or steal your personal information. Therefore, it is not safe and you should always use an antivirus software and scan the file before opening it.
    -
    Is Ford ECAT torrent download reliable?
    -
    Downloading Ford ECAT from torrent sites may result in low-quality or outdated files that may not work properly or contain errors. Therefore, it is not reliable and you should always check the file size, version, seeders, leechers, comments, and ratings before downloading it.
    -
    Is Ford ECAT torrent download worth it?
    -
    Downloading Ford ECAT from torrent sites may save you money and give you access to the best spare parts catalog for your Ford vehicle. Therefore, it may be worth it if you are willing to take the risks and follow the instructions carefully.
    -
    - -

    Conclusion

    - -

    Ford ECAT is an electronic parts catalog that provides information about all Ford vehicles produced for the European market. It is a useful tool for anyone who owns, repairs, or sells Ford cars, jeeps, or light commercial vehicles.

    - -

    To download Ford ECAT for free, you can use torrent sites that offer it as a torrent file. However, you should also be careful of some risks involved in downloading torrents, such as viruses, malware, legal issues, or low-quality files.

    - -

If you want to learn more about Ford ECAT torrent downloads or other topics related to Ford cars, visit our website today!

    -

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/IObit Start Menu 8 Pro 5.1.0.11 Keys.md b/spaces/inplisQlawa/anything-midjourney-v4-1/IObit Start Menu 8 Pro 5.1.0.11 Keys.md deleted file mode 100644 index 37442957f28445f009b53c32b5900a593abc9805..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/IObit Start Menu 8 Pro 5.1.0.11 Keys.md +++ /dev/null @@ -1,10 +0,0 @@ -

    IObit Start Menu 8 Pro 5.1.0.11 Keys


    Download ……… https://urlin.us/2uEy2v



    -
-Key to help with the following problem: how to delete a file, free up space, and protect it with a password (only letting the user delete it) in IObit Start Menu 8 Pro (8.0.13 build 518) once you install the software. To move the menu, the icon can be found in the left-most column of the menu bar. Once you find the IObit Start Menu 8 Pro icon, you can move it by dragging; keep holding the left mouse button down while dragging to move the icon. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Deutschland Spielt Unwrapper Exe Patch.md b/spaces/inreVtussa/clothingai/Examples/Deutschland Spielt Unwrapper Exe Patch.md deleted file mode 100644 index 050a0d24e64d936c7efb5ec7092c83f1933f5d2e..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Deutschland Spielt Unwrapper Exe Patch.md +++ /dev/null @@ -1,127 +0,0 @@ -
    -

    Deutschland Spielt Unwrapper Exe Patch: A Must-Have Tool for PC Gamers

    - -

    If you are a fan of PC games, especially those from Deutschland Spielt, you might have encountered a problem: some of the games are locked and require a code to unlock them. This can be frustrating and annoying, especially if you have paid for the games and want to enjoy them fully. Fortunately, there is a solution: Deutschland Spielt Unwrapper Exe Patch.

    -

    deutschland spielt unwrapper exe patch


    Download Filehttps://tiurll.com/2uCiYI



    - -

    Deutschland Spielt Unwrapper Exe Patch is a tool that can unlock any game from Deutschland Spielt without requiring a code. It can also remove any ads or limitations that might interfere with your gaming experience. With this tool, you can enjoy all the games from Deutschland Spielt without any hassle or cost.

    - -

    What is Deutschland Spielt Unwrapper Exe Patch?

    - -

    Deutschland Spielt Unwrapper Exe Patch is a software program that can modify the executable files of the games from Deutschland Spielt. It can bypass the protection system that prevents the games from running without a code. It can also remove any ads or limitations that might affect the performance or quality of the games.

    - -

    Deutschland Spielt Unwrapper Exe Patch is compatible with any game from Deutschland Spielt, regardless of the genre or release date. It can unlock games such as Hidden Object, Puzzle, Adventure, Strategy, Simulation, and more. It can also work with any version of Windows, from XP to 10.

    - -

    How to Use Deutschland Spielt Unwrapper Exe Patch?

    - -

    Using Deutschland Spielt Unwrapper Exe Patch is very easy and simple. You just need to follow these steps:

    - -
      -
    1. Download Deutschland Spielt Unwrapper Exe Patch from one of the websites that offer it for free. You can find them by searching on Google or SoundCloud.
    2. -
    3. Extract the zip file that contains Deutschland Spielt Unwrapper Exe Patch. You will see a file named unwrapper.exe.
    4. -
    5. Run unwrapper.exe. You will see a window that asks you to select the game you want to unlock.
    6. -
    7. Browse your computer and find the executable file of the game you want to unlock. It usually has the name of the game followed by .exe. For example, Haus der 1000 Türen.exe.
    8. -
    9. Select the executable file of the game and click on Open. You will see a message that says Patching done!.
    10. -
    11. Close unwrapper.exe. You have successfully unlocked the game.
    12. -
    13. Run the game as usual. You will see that it does not ask for a code anymore. You will also see that there are no ads or limitations anymore.
    14. -
    15. Enjoy your game!
    16. -
    - -

    What are the Benefits of Using Deutschland Spielt Unwrapper Exe Patch?

    - -

    Using Deutschland Spielt Unwrapper Exe Patch has many benefits for PC gamers. Here are some of them:

    - -
      -
    • You can save money by not having to buy codes or subscriptions to unlock the games.
    • -
    • You can save time by not having to enter codes or wait for ads to finish.
    • -
    • You can improve your gaming experience by not having to deal with ads or limitations that might distract you or slow down your game.
    • -
    • You can access all the features and content of the games without any restriction or limitation.
    • -
    • You can support the developers of the games by playing their games and giving them feedback.
    • -
    - -

Conclusion

    - -

Deutschland Spielt Unwrapper Exe Patch is a very useful tool for PC gamers, especially those who enjoy the games from Deutschland Spielt. The tool can unlock every game from Deutschland Spielt without requiring a code, and it can remove any ads or restrictions that might interfere with your gaming experience. With this tool, you can enjoy all the games from Deutschland Spielt without any hassle or cost.

    -

    - -

Deutschland Spielt Unwrapper Exe Patch is a software program that can modify the executable files of the games from Deutschland Spielt. It can bypass the protection system that prevents the games from running without a code, and it can remove any ads or restrictions that might affect the performance or quality of the games.

    - -

Deutschland Spielt Unwrapper Exe Patch is compatible with every game from Deutschland Spielt, regardless of genre or release date. It can unlock games such as Hidden Object, Puzzle, Adventure, Strategy, Simulation, and more, and it works with every version of Windows, from XP to 10.

    - -

Using Deutschland Spielt Unwrapper Exe Patch is very easy and simple. You just need to follow these steps:

    - -
      -
1. Download Deutschland Spielt Unwrapper Exe Patch from one of the websites that offer it for free. You can find them by searching on Google or SoundCloud.
    2. -
3. Extract the zip file that contains Deutschland Spielt Unwrapper Exe Patch. You will see a file named unwrapper.exe.
    4. -
5. Run unwrapper.exe. You will see a window that asks you to select the game you want to unlock.
    6. -
7. Browse your computer and find the executable file of the game you want to unlock. It usually has the name of the game followed by .exe, for example Haus der 1000 Türen.exe.
    8. -
9. Select the game's executable file and click Open. You will see a message that says Patching done!.
    10. -
11. Close unwrapper.exe. You have successfully unlocked the game.
    12. -
13. Run the game as usual. You will see that it no longer asks for a code, and that there are no more ads or restrictions.
    14. -
15. Enjoy your game!
    16. -
    - -

Using Deutschland Spielt Unwrapper Exe Patch has many benefits for PC gamers. Here are some of them:

    - -
      -
• You can save money by not having to buy codes or subscriptions to unlock the games.
    • -
• You can save time by not having to enter codes or wait for ads to finish.
    • -
• You can improve your gaming experience by not having to deal with ads or restrictions that might distract you or slow down your game.
    • -
• You can access all the features and content of the games without any restrictions or limitations.
    • -
• You can support the developers of the games by playing their games and giving them feedback.
    • -
    - -

So what are you waiting for? Download Deutschland Spielt Unwrapper Exe Patch now and start playing all the games from Deutschland Spielt without any hassle or cost. Have fun!

    -

    How to Uninstall Deutschland Spielt Unwrapper Exe Patch?

    - -

    If you want to uninstall Deutschland Spielt Unwrapper Exe Patch, you can do so by following these steps:

    - -
      -
    1. Find the folder where you have extracted Deutschland Spielt Unwrapper Exe Patch. You can use the search function on your computer to locate it.
    2. -
    3. Delete the folder and all its contents. You don't need them anymore.
    4. -
    5. Find the games that you have unlocked with Deutschland Spielt Unwrapper Exe Patch. You can use the search function on your computer to locate them.
    6. -
    7. Delete the games and all their files. You don't need them anymore.
    8. -
    9. Empty your recycle bin. This will permanently remove Deutschland Spielt Unwrapper Exe Patch and the unlocked games from your computer.
    10. -
    - -

With that, you can remove Deutschland Spielt Unwrapper Exe Patch quickly and easily. You can reclaim free space on your computer and avoid any problems that might arise from using Deutschland Spielt Unwrapper Exe Patch.

    - -

    Frequently Asked Questions about Deutschland Spielt Unwrapper Exe Patch

    - -

    Here are some of the frequently asked questions about Deutschland Spielt Unwrapper Exe Patch and their answers:

    - -
      -
    • Q: Is Deutschland Spielt Unwrapper Exe Patch free?
    • -
    • A: Yes, Deutschland Spielt Unwrapper Exe Patch is free to download and use. You don't have to pay anything to use it.
    • -
    • Q: Is Deutschland Spielt Unwrapper Exe Patch safe?
    • -
    • A: Yes, Deutschland Spielt Unwrapper Exe Patch is safe to use. It does not contain any viruses, malware, or spyware that might harm your computer or steal your personal information. It is also tested and verified by many users who have used it without any problems or complaints.
    • -
    • Q: Is Deutschland Spielt Unwrapper Exe Patch legal?
    • -
    • A: Yes, Deutschland Spielt Unwrapper Exe Patch is legal to use. It does not violate any copyrights or trademarks of Deutschland Spielt or the game developers. It is also not a piracy tool that allows you to download or play games illegally. It is simply a tool that modifies the executable files of the games that you already own or have access to.
    • -
    • Q: What games can Deutschland Spielt Unwrapper Exe Patch unlock?
    • -
    • A: Deutschland Spielt Unwrapper Exe Patch can unlock any game from Deutschland Spielt, regardless of the genre or release date. It can unlock games such as Hidden Object, Puzzle, Adventure, Strategy, Simulation, and more.
    • -
    • Q: What versions of Windows can Deutschland Spielt Unwrapper Exe Patch work with?
    • -
    • A: Deutschland Spielt Unwrapper Exe Patch can work with any version of Windows, from XP to 10.
    • -
    • Q: How can I contact the creators of Deutschland Spielt Unwrapper Exe Patch?
    • -
    • A: You can contact the creators of Deutschland Spielt Unwrapper Exe Patch by visiting one of the websites that offer it for free. You can find them by searching on Google or SoundCloud. You can also leave comments or questions for them on their SoundCloud pages.
    • -
    - -

If you have any other questions about Deutschland Spielt Unwrapper Exe Patch, you can look for the answers on the internet or ask other users who have already used the tool.

    - -


    Conclusion

    - -

    In conclusion, Deutschland Spielt Unwrapper Exe Patch is a tool that can help PC gamers enjoy the games from Deutschland Spielt without any hassle or cost. It can unlock any game from Deutschland Spielt without requiring a code. It can also remove any ads or restrictions that can interfere with the gaming experience. It is easy to use, safe, legal, and compatible with any version of Windows and any game from Deutschland Spielt.

    - -

    If you are a fan of Deutschland Spielt games, or if you want to try them out, you should download Deutschland Spielt Unwrapper Exe Patch and see for yourself how it works. You will be amazed by how much fun and satisfaction you can get from playing the games from Deutschland Spielt without any limitations or interruptions.

    - -

    However, you should also remember to use Deutschland Spielt Unwrapper Exe Patch responsibly and respectfully. You should not distribute or share Deutschland Spielt Unwrapper Exe Patch or the unlocked games with others. You should also not use Deutschland Spielt Unwrapper Exe Patch for commercial purposes or profit. You should also respect the rights and wishes of Deutschland Spielt and the game developers. You should support them by playing their games and giving them feedback.

    - -

    Deutschland Spielt Unwrapper Exe Patch is a tool that can make PC gaming more enjoyable and accessible for everyone. It is a tool that can unlock the full potential of the games from Deutschland Spielt. It is a tool that can make you happy.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/iremkrc/chatbot-demo/app.py b/spaces/iremkrc/chatbot-demo/app.py deleted file mode 100644 index a2cad34711b786699e39dd42947135a3ef3f15f8..0000000000000000000000000000000000000000 --- a/spaces/iremkrc/chatbot-demo/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import gradio as gr -import time -import openai -import os -from dotenv import load_dotenv - -# Load environment variables from the .env file -load_dotenv() - -#openai.api_key = os.environ["openai_api_key"] -openai.api_key = os.getenv("openai_api_key") -messages = [ {"role": "system", "content": - "You are acting as Uncle Iroh living in Avatar: The Last Airbender universe. Answer the following questions and give advices as if you are Uncle Iroh."} ] -chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are acting as Uncle Iroh living in Avatar: The Last Airbender universe. Answer the following questions as if you are Uncle Iroh."}, - {"role": "user", "content": "Who are you?"}, - {"role": "assistant", "content": "I am Iroh."}, - {"role": "user", "content": "I feel confused, what should I do now?"}, - {"role": "assistant", "content": "It is time for you to look inward, and start asking yourself the big questions. Who are you? And what do you want?"}, - {"role": "user", "content": "Can you give me an advice about life?"}, - {"role": "assistant", "content": "Life happens wherever you are, whether you make it or not."} - ] - ) - -with gr.Blocks() as demo: - chatbot = gr.Chatbot() - msg = gr.Textbox() - clear = gr.Button("Clear") - - def user(user_message, history): - messages.append( - {"role": "user", "content": "Act like Uncle Iroh from Avatar: The Last Airbender, answer following question and give advices: " + user_message}, - ) - return "", history + [[user_message, None]] - - def bot(history): - chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo", messages=messages - ) - bot_message = chat.choices[0].message.content - messages.append({"role": "assistant", "content": bot_message}) - history[-1][1] = "" - for character in bot_message: - history[-1][1] += character - time.sleep(0.05) - yield history - - msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - clear.click(lambda: None, None, chatbot, queue=False) - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/test/server_poll.py b/spaces/jackli888/stable-diffusion-webui/test/server_poll.py deleted file mode 100644 index 42d56a4caacfc40d686dc99668d72238392448cd..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/test/server_poll.py +++ /dev/null @@ -1,24 +0,0 @@ -import unittest -import requests -import time - - -def run_tests(proc, test_dir): - timeout_threshold = 240 - start_time = time.time() - while time.time()-start_time < timeout_threshold: - try: - requests.head("http://localhost:7860/") - break - except requests.exceptions.ConnectionError: - if proc.poll() is not None: - break - if proc.poll() is None: - if test_dir is None: - test_dir = "test" - suite = unittest.TestLoader().discover(test_dir, pattern="*_test.py", top_level_dir="test") - result = unittest.TextTestRunner(verbosity=2).run(suite) - return len(result.failures) + len(result.errors) - else: - print("Launch unsuccessful") - return 1 diff --git a/spaces/james-oldfield/PandA/networks/genforce/configs/stylegan_ffhq1024.py 
b/spaces/james-oldfield/PandA/networks/genforce/configs/stylegan_ffhq1024.py deleted file mode 100644 index 0da23abd89966c69d292334979d0ef6cbff1ab69..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/configs/stylegan_ffhq1024.py +++ /dev/null @@ -1,63 +0,0 @@ -# python3.7 -"""Configuration for training StyleGAN on FF-HQ (1024) dataset. - -All settings are particularly used for one replica (GPU), such as `batch_size` -and `num_workers`. -""" - -runner_type = 'StyleGANRunner' -gan_type = 'stylegan' -resolution = 1024 -batch_size = 4 -val_batch_size = 16 -total_img = 25000_000 - -# Training dataset is repeated at the beginning to avoid loading dataset -# repeatedly at the end of each epoch. This can save some I/O time. -data = dict( - num_workers=4, - repeat=500, - # train=dict(root_dir='data/ffhq', resolution=resolution, mirror=0.5), - # val=dict(root_dir='data/ffhq', resolution=resolution), - train=dict(root_dir='data/ffhq.zip', data_format='zip', - resolution=resolution, mirror=0.5), - val=dict(root_dir='data/ffhq.zip', data_format='zip', - resolution=resolution), -) - -controllers = dict( - RunningLogger=dict(every_n_iters=10), - ProgressScheduler=dict( - every_n_iters=1, init_res=8, minibatch_repeats=4, - lod_training_img=600_000, lod_transition_img=600_000, - batch_size_schedule=dict(res4=64, res8=32, res16=16, res32=8), - ), - Snapshoter=dict(every_n_iters=500, first_iter=True, num=200), - FIDEvaluator=dict(every_n_iters=5000, first_iter=True, num=50000), - Checkpointer=dict(every_n_iters=5000, first_iter=True), -) - -modules = dict( - discriminator=dict( - model=dict(gan_type=gan_type, resolution=resolution), - lr=dict(lr_type='FIXED'), - opt=dict(opt_type='Adam', base_lr=1e-3, betas=(0.0, 0.99)), - kwargs_train=dict(), - kwargs_val=dict(), - ), - generator=dict( - model=dict(gan_type=gan_type, resolution=resolution), - lr=dict(lr_type='FIXED'), - opt=dict(opt_type='Adam', base_lr=1e-3, betas=(0.0, 0.99)), - kwargs_train=dict(w_moving_decay=0.995, style_mixing_prob=0.9, - trunc_psi=1.0, trunc_layers=0, randomize_noise=True), - kwargs_val=dict(trunc_psi=1.0, trunc_layers=0, randomize_noise=False), - g_smooth_img=10_000, - ) -) - -loss = dict( - type='LogisticGANLoss', - d_loss_kwargs=dict(r1_gamma=10.0), - g_loss_kwargs=dict(), -) diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/training/networks_stylegan2.py b/spaces/james-oldfield/PandA/networks/stylegan3/training/networks_stylegan2.py deleted file mode 100644 index 1af65a5a3bd9e06c3cecb11ed5f3256ce29311eb..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/training/networks_stylegan2.py +++ /dev/null @@ -1,804 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Network architectures from the paper -"Analyzing and Improving the Image Quality of StyleGAN". -Matches the original implementation of configs E-F by Karras et al. 
at -https://github.com/NVlabs/stylegan2/blob/master/training/networks_stylegan2.py""" - -import numpy as np -import torch -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_resample -from torch_utils.ops import upfirdn2d -from torch_utils.ops import bias_act -from torch_utils.ops import fma - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def modulated_conv2d( - x, # Input tensor of shape [batch_size, in_channels, in_height, in_width]. - weight, # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width]. - styles, # Modulation coefficients of shape [batch_size, in_channels]. - noise = None, # Optional noise tensor to add to the output activations. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - padding = 0, # Padding with respect to the upsampled image. - resample_filter = None, # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter(). - demodulate = True, # Apply weight demodulation? - flip_weight = True, # False = convolution, True = correlation (matches torch.nn.functional.conv2d). - fused_modconv = True, # Perform modulation, convolution, and demodulation as a single fused operation? -): - batch_size = x.shape[0] - out_channels, in_channels, kh, kw = weight.shape - misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(styles, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs to avoid FP16 overflow. - if x.dtype == torch.float16 and demodulate: - weight = weight * (1 / np.sqrt(in_channels * kh * kw) / weight.norm(float('inf'), dim=[1,2,3], keepdim=True)) # max_Ikk - styles = styles / styles.norm(float('inf'), dim=1, keepdim=True) # max_I - - # Calculate per-sample weights and demodulation coefficients. - w = None - dcoefs = None - if demodulate or fused_modconv: - w = weight.unsqueeze(0) # [NOIkk] - w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk] - if demodulate: - dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO] - if demodulate and fused_modconv: - w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk] - - # Execute by scaling the activations before and after the convolution. - if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1) - x = conv2d_resample.conv2d_resample(x=x, w=weight.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1), noise.to(x.dtype)) - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. 
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - misc.assert_shape(x, [batch_size, in_channels, None, None]) - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - x = x.reshape(batch_size, -1, *x.shape[2:]) - if noise is not None: - x = x.add_(noise) - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 1, # Learning rate multiplier. - bias_init = 0, # Initial value for the additive bias. - ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full([out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - kernel_size, # Width and height of the convolution kernel. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output to +-X, None = disable clamping. - channels_last = False, # Expect the input to have memory_format=channels_last? - trainable = True, # Update the weights of this layer during training? 
- ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, activation={self.activation:s},', - f'up={self.up}, down={self.down}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality, 0 = no latent. - c_dim, # Conditioning label (C) dimensionality, 0 = no label. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output, None = do not broadcast. - num_layers = 8, # Number of mapping layers. - embed_features = None, # Label embedding dimensionality, None = same as w_dim. - layer_features = None, # Number of intermediate features in the mapping layers, None = same as w_dim. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.998, # Decay for tracking the moving average of W during training, None = do not track. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer(in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if update_emas and self.w_avg_beta is not None: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. - if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this layer. - kernel_size = 3, # Convolution kernel size. - up = 1, # Integer upsampling factor. - use_noise = True, # Enable noise input? - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - channels_last = False, # Use channels_last format for the weights? 
- ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.resolution = resolution - self.up = up - self.use_noise = use_noise - self.activation = activation - self.conv_clamp = conv_clamp - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.act_gain = bias_act.activation_funcs[activation].def_gain - - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - if use_noise: - self.register_buffer('noise_const', torch.randn([resolution, resolution])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1): - assert noise_mode in ['random', 'const', 'none'] - in_resolution = self.resolution // self.up - misc.assert_shape(x, [None, self.in_channels, in_resolution, in_resolution]) - styles = self.affine(w) - - noise = None - if self.use_noise and noise_mode == 'random': - noise = torch.randn([x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength - if self.use_noise and noise_mode == 'const': - noise = self.noise_const * self.noise_strength - - flip_weight = (self.up == 1) # slightly faster - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up, - padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to(x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d},', - f'resolution={self.resolution:d}, up={self.up}, activation={self.activation:s}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.conv_clamp = conv_clamp - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - - def forward(self, x, w, fused_modconv=True): - styles = self.affine(w) * self.weight_gain - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, demodulate=False, fused_modconv=fused_modconv) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - return x - - def extra_repr(self): - return f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class 
-class SynthesisBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this block. - img_channels, # Number of output color channels. - is_last, # Is this the last block? - architecture = 'skip', # Architecture: 'orig', 'skip', 'resnet'. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - fused_modconv_default = True, # Default value of fused_modconv. 'inference_only' = True for inference, False for training. - **layer_kwargs, # Arguments for SynthesisLayer. - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.fused_modconv_default = fused_modconv_default - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - - if in_channels == 0: - self.const = torch.nn.Parameter(torch.randn([out_channels, resolution, resolution])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2, - resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs): - _ = update_emas # unused - misc.assert_shape(ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - if ws.device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - fused_modconv = self.fused_modconv_default - if fused_modconv == 'inference_only': - fused_modconv = (not self.training) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - misc.assert_shape(x, [None, self.in_channels, self.resolution // 2, self.resolution // 2]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # Main layers. 
- if self.in_channels == 0: - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - - # ToRGB. - if img is not None: - misc.assert_shape(img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - **block_kwargs, # Arguments for SynthesisBlock. - ): - assert img_resolution >= 4 and img_resolution & (img_resolution - 1) == 0 - super().__init__() - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.num_fp16_res = num_fp16_res - self.block_resolutions = [2 ** i for i in range(2, self.img_resolution_log2 + 1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - self.num_ws = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res // 2] if res > 4 else 0 - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) - is_last = (res == self.img_resolution) - block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res, - img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, **block_kwargs) - self.num_ws += block.num_conv - if is_last: - self.num_ws += block.num_torgb - setattr(self, f'b{res}', block) - - def forward(self, ws, x, img, start, stop, **block_kwargs): - block_ws = [] - - with torch.autograd.profiler.record_function('split_ws'): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32) - w_idx = 0 - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - block_ws.append(ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - - # x = img = None - # if x is None: x = img = None - if stop is None: stop = len(self.block_resolutions) - - for res, cur_ws in zip(self.block_resolutions[start:stop], block_ws[start:stop]): - block = getattr(self, f'b{res}') - x, img = block(x, img, cur_ws, **block_kwargs) - - # x = img = None - # for res, cur_ws in zip(self.block_resolutions, block_ws): - # 
block = getattr(self, f'b{res}') - # x, img = block(x, img, cur_ws, **block_kwargs) - - return x, img - - def extra_repr(self): - return ' '.join([ - f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_fp16_res={self.num_fp16_res:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality. - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output resolution. - img_channels, # Number of output color channels. - mapping_kwargs = {}, # Arguments for MappingNetwork. - **synthesis_kwargs, # Arguments for SynthesisNetwork. - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - def forward(self, z, c, x=None, img=None, truncation_psi=1, truncation_cutoff=None, update_emas=False, **synthesis_kwargs): - ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas) - img = self.synthesis(ws, x=x, img=img, update_emas=update_emas, **synthesis_kwargs) - return img - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - tmp_channels, # Number of intermediate channels. - out_channels, # Number of output channels. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - first_layer_idx, # Index of the first layer. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - freeze_layers = 0, # Freeze-D: Number of layers to freeze. 
- ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - - self.num_layers = 0 - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False): - if (x if x is not None else img).device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - img = upfirdn2d.downsample2d(img, self.resample_filter) if self.architecture == 'skip' else None - - # Main layers. - if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - y = x.reshape(G, -1, F, c, H, W) # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. 
- y = y - y.mean(dim=0) # [GnFcHW] Subtract mean over group. - y = y.square().mean(dim=0) # [nFcHW] Calc variance over group. - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - y = y.mean(dim=[2,3,4]) # [nF] Take average over channels and pixels. - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - y = y.repeat(G, 1, H, W) # [NFHW] Replicate over group and pixels. - x = torch.cat([x, y], dim=1) # [NCHW] Append to input as new channels. - return x - - def extra_repr(self): - return f'group_size={self.group_size}, num_channels={self.num_channels:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - cmap_dim, # Dimensionality of mapped conditioning label, 0 = no label. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - mbstd_group_size = 4, # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_num_channels = 1, # Number of features for the minibatch standard deviation layer, 0 = disable. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - - if architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer(group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, kernel_size=3, activation=activation, conv_clamp=conv_clamp) - self.fc = FullyConnectedLayer(in_channels * (resolution ** 2), in_channels, activation=activation) - self.out = FullyConnectedLayer(in_channels, 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - misc.assert_shape(cmap, [None, self.cmap_dim]) - x = (x * cmap).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim)) - - assert x.dtype == dtype - return x - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. 
- channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, update_emas=False, **block_kwargs): - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -#---------------------------------------------------------------------------- diff --git a/spaces/jbilcke-hf/ai-clip-factory/Dockerfile b/spaces/jbilcke-hf/ai-clip-factory/Dockerfile deleted file mode 100644 index 4ef86c13e803b99af8527d40a42131c3b3fbc58c..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM node:20-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." 
&& exit 1; \ - fi - -# Uncomment the following lines if you want to use a secret at buildtime, -# for example to access your private npm packages -# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \ -# $(cat /run/secrets/HF_EXAMPLE_SECRET) - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -# RUN yarn build - -# If you use yarn, comment out this line and use the line above -RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static -COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache -# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolation.ts b/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolation.ts deleted file mode 100644 index 8465cb0c278b1551ecbc2903e0861b2a7f05903e..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolation.ts +++ /dev/null @@ -1,25 +0,0 @@ -"use server" - -import { interpolateGradio } from "./interpolateGradio" -import { interpolateReplicate } from "./interpolateReplicate" - -const interpolationEngine = `${process.env.INTERPOLATION_ENGINE || ""}` - -export async function interpolateVideo(inputVideo: string): Promise { - if (!inputVideo?.length) { - throw new Error(`missing input video`) - } - - try { - - if (interpolationEngine === "STMFNET_REPLICATE") { - return interpolateReplicate(inputVideo) - } else if (interpolationEngine === "FILM_GRADIO") { - return interpolateGradio(inputVideo) - } else { - throw new Error(`unsupported interpolation engine "${interpolationEngine}"`) - } - } catch (err) { - throw new Error(`failed to interpolate the video ${err}`) - } -} \ No newline at end of file diff --git a/spaces/jknero/rembackkk/app.py b/spaces/jknero/rembackkk/app.py deleted file mode 100644 index 510f76ece5e16e96b84eddc6981124bf2ccf54a5..0000000000000000000000000000000000000000 --- a/spaces/jknero/rembackkk/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -os.system("/usr/local/bin/python -m pip install --upgrade pip") -import gradio as gr -from rembg import remove -import cv2 - -def inference(img): - input_img = cv2.imread(img) - output = remove(input_img[:, :, [2,1,0]]) - return output - -title = "REMOVEDOR DE FUNDO" - -description = "By: Giovanne😊" - -article = "

    Instagram: @gz_777x
    " - - -gr.Interface( - inference, - gr.inputs.Image(type="filepath", label="Input"), - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article - ).launch() \ No newline at end of file diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py deleted file mode 100644 index 77a6e05eae6a939ae7575ae70b7173644141fffe..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py +++ /dev/null @@ -1,56 +0,0 @@ -from encoder.data_objects.random_cycler import RandomCycler -from encoder.data_objects.speaker_batch import SpeakerBatch -from encoder.data_objects.speaker import Speaker -from encoder.params_data import partials_n_frames -from torch.utils.data import Dataset, DataLoader -from pathlib import Path - -# TODO: improve with a pool of speakers for data efficiency - -class SpeakerVerificationDataset(Dataset): - def __init__(self, datasets_root: Path): - self.root = datasets_root - speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()] - if len(speaker_dirs) == 0: - raise Exception("No speakers found. Make sure you are pointing to the directory " - "containing all preprocessed speaker directories.") - self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs] - self.speaker_cycler = RandomCycler(self.speakers) - - def __len__(self): - return int(1e10) - - def __getitem__(self, index): - return next(self.speaker_cycler) - - def get_logs(self): - log_string = "" - for log_fpath in self.root.glob("*.txt"): - with log_fpath.open("r") as log_file: - log_string += "".join(log_file.readlines()) - return log_string - - -class SpeakerVerificationDataLoader(DataLoader): - def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None, - batch_sampler=None, num_workers=0, pin_memory=False, timeout=0, - worker_init_fn=None): - self.utterances_per_speaker = utterances_per_speaker - - super().__init__( - dataset=dataset, - batch_size=speakers_per_batch, - shuffle=False, - sampler=sampler, - batch_sampler=batch_sampler, - num_workers=num_workers, - collate_fn=self.collate, - pin_memory=pin_memory, - drop_last=False, - timeout=timeout, - worker_init_fn=worker_init_fn - ) - - def collate(self, speakers): - return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames) - \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/openapi/models.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/openapi/models.py deleted file mode 100644 index 5f3bdbb2066a612a9d58af46c923f74734079868..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/openapi/models.py +++ /dev/null @@ -1,611 +0,0 @@ -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union - -from fastapi._compat import ( - PYDANTIC_V2, - CoreSchema, - GetJsonSchemaHandler, - JsonSchemaValue, - _model_rebuild, - with_info_plain_validator_function, -) -from fastapi.logger import logger -from pydantic import AnyUrl, BaseModel, Field -from typing_extensions import Annotated, Literal, TypedDict -from typing_extensions import deprecated as typing_deprecated - -try: - import email_validator - - assert email_validator # 
make autoflake ignore the unused import - from pydantic import EmailStr -except ImportError: # pragma: no cover - - class EmailStr(str): # type: ignore - @classmethod - def __get_validators__(cls) -> Iterable[Callable[..., Any]]: - yield cls.validate - - @classmethod - def validate(cls, v: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(v) - - @classmethod - def _validate(cls, __input_value: Any, _: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(__input_value) - - @classmethod - def __get_pydantic_json_schema__( - cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler - ) -> JsonSchemaValue: - return {"type": "string", "format": "email"} - - @classmethod - def __get_pydantic_core_schema__( - cls, source: Type[Any], handler: Callable[[Any], CoreSchema] - ) -> CoreSchema: - return with_info_plain_validator_function(cls._validate) - - -class Contact(BaseModel): - name: Optional[str] = None - url: Optional[AnyUrl] = None - email: Optional[EmailStr] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class License(BaseModel): - name: str - identifier: Optional[str] = None - url: Optional[AnyUrl] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Info(BaseModel): - title: str - summary: Optional[str] = None - description: Optional[str] = None - termsOfService: Optional[str] = None - contact: Optional[Contact] = None - license: Optional[License] = None - version: str - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ServerVariable(BaseModel): - enum: Annotated[Optional[List[str]], Field(min_length=1)] = None - default: str - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Server(BaseModel): - url: Union[AnyUrl, str] - description: Optional[str] = None - variables: Optional[Dict[str, ServerVariable]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Reference(BaseModel): - ref: str = Field(alias="$ref") - - -class Discriminator(BaseModel): - propertyName: str - mapping: Optional[Dict[str, str]] = None - - -class XML(BaseModel): - name: Optional[str] = None - namespace: Optional[str] = None - prefix: Optional[str] = None - attribute: Optional[bool] = None - wrapped: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ExternalDocumentation(BaseModel): - description: Optional[str] = None - url: AnyUrl - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Schema(BaseModel): - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-the-json-schema-core-vocabu - # Core Vocabulary - schema_: Optional[str] = Field(default=None, alias="$schema") - vocabulary: Optional[str] = Field(default=None, alias="$vocabulary") - id: Optional[str] = Field(default=None, alias="$id") - anchor: Optional[str] = Field(default=None, alias="$anchor") - dynamicAnchor: Optional[str] = Field(default=None, alias="$dynamicAnchor") - ref: Optional[str] 
= Field(default=None, alias="$ref") - dynamicRef: Optional[str] = Field(default=None, alias="$dynamicRef") - defs: Optional[Dict[str, "SchemaOrBool"]] = Field(default=None, alias="$defs") - comment: Optional[str] = Field(default=None, alias="$comment") - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-a-vocabulary-for-applying-s - # A Vocabulary for Applying Subschemas - allOf: Optional[List["SchemaOrBool"]] = None - anyOf: Optional[List["SchemaOrBool"]] = None - oneOf: Optional[List["SchemaOrBool"]] = None - not_: Optional["SchemaOrBool"] = Field(default=None, alias="not") - if_: Optional["SchemaOrBool"] = Field(default=None, alias="if") - then: Optional["SchemaOrBool"] = None - else_: Optional["SchemaOrBool"] = Field(default=None, alias="else") - dependentSchemas: Optional[Dict[str, "SchemaOrBool"]] = None - prefixItems: Optional[List["SchemaOrBool"]] = None - # TODO: uncomment and remove below when deprecating Pydantic v1 - # It generales a list of schemas for tuples, before prefixItems was available - # items: Optional["SchemaOrBool"] = None - items: Optional[Union["SchemaOrBool", List["SchemaOrBool"]]] = None - contains: Optional["SchemaOrBool"] = None - properties: Optional[Dict[str, "SchemaOrBool"]] = None - patternProperties: Optional[Dict[str, "SchemaOrBool"]] = None - additionalProperties: Optional["SchemaOrBool"] = None - propertyNames: Optional["SchemaOrBool"] = None - unevaluatedItems: Optional["SchemaOrBool"] = None - unevaluatedProperties: Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-structural - # A Vocabulary for Structural Validation - type: Optional[str] = None - enum: Optional[List[Any]] = None - const: Optional[Any] = None - multipleOf: Optional[float] = Field(default=None, gt=0) - maximum: Optional[float] = None - exclusiveMaximum: Optional[float] = None - minimum: Optional[float] = None - exclusiveMinimum: Optional[float] = None - maxLength: Optional[int] = Field(default=None, ge=0) - minLength: Optional[int] = Field(default=None, ge=0) - pattern: Optional[str] = None - maxItems: Optional[int] = Field(default=None, ge=0) - minItems: Optional[int] = Field(default=None, ge=0) - uniqueItems: Optional[bool] = None - maxContains: Optional[int] = Field(default=None, ge=0) - minContains: Optional[int] = Field(default=None, ge=0) - maxProperties: Optional[int] = Field(default=None, ge=0) - minProperties: Optional[int] = Field(default=None, ge=0) - required: Optional[List[str]] = None - dependentRequired: Optional[Dict[str, Set[str]]] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-vocabularies-for-semantic-c - # Vocabularies for Semantic Content With "format" - format: Optional[str] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-the-conten - # A Vocabulary for the Contents of String-Encoded Data - contentEncoding: Optional[str] = None - contentMediaType: Optional[str] = None - contentSchema: Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-basic-meta - # A Vocabulary for Basic Meta-Data Annotations - title: Optional[str] = None - description: Optional[str] = None - default: Optional[Any] = None - deprecated: Optional[bool] = None - readOnly: 
Optional[bool] = None - writeOnly: Optional[bool] = None - examples: Optional[List[Any]] = None - # Ref: OpenAPI 3.1.0: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#schema-object - # Schema Object - discriminator: Optional[Discriminator] = None - xml: Optional[XML] = None - externalDocs: Optional[ExternalDocumentation] = None - example: Annotated[ - Optional[Any], - typing_deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -# Ref: https://json-schema.org/draft/2020-12/json-schema-core.html#name-json-schema-documents -# A JSON Schema MUST be an object or a boolean. -SchemaOrBool = Union[Schema, bool] - - -class Example(TypedDict, total=False): - summary: Optional[str] - description: Optional[str] - value: Optional[Any] - externalValue: Optional[AnyUrl] - - if PYDANTIC_V2: # type: ignore [misc] - __pydantic_config__ = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterInType(Enum): - query = "query" - header = "header" - path = "path" - cookie = "cookie" - - -class Encoding(BaseModel): - contentType: Optional[str] = None - headers: Optional[Dict[str, Union["Header", Reference]]] = None - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class MediaType(BaseModel): - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - encoding: Optional[Dict[str, Encoding]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterBase(BaseModel): - description: Optional[str] = None - required: Optional[bool] = None - deprecated: Optional[bool] = None - # Serialization rules for simple scenarios - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - # Serialization rules for more complex scenarios - content: Optional[Dict[str, MediaType]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Parameter(ParameterBase): - name: str - in_: ParameterInType = Field(alias="in") - - -class Header(ParameterBase): - pass - - -class RequestBody(BaseModel): - description: Optional[str] = None - content: Dict[str, MediaType] - required: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Link(BaseModel): - operationRef: Optional[str] = None - operationId: Optional[str] = None - parameters: Optional[Dict[str, Union[Any, str]]] = None - requestBody: Optional[Union[Any, str]] = None - description: Optional[str] = None - server: Optional[Server] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Response(BaseModel): - description: str - headers: Optional[Dict[str, Union[Header, Reference]]] = None - content: Optional[Dict[str, MediaType]] = None - links: Optional[Dict[str, 
Union[Link, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Operation(BaseModel): - tags: Optional[List[str]] = None - summary: Optional[str] = None - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - operationId: Optional[str] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - requestBody: Optional[Union[RequestBody, Reference]] = None - # Using Any for Specification Extensions - responses: Optional[Dict[str, Union[Response, Any]]] = None - callbacks: Optional[Dict[str, Union[Dict[str, "PathItem"], Reference]]] = None - deprecated: Optional[bool] = None - security: Optional[List[Dict[str, List[str]]]] = None - servers: Optional[List[Server]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class PathItem(BaseModel): - ref: Optional[str] = Field(default=None, alias="$ref") - summary: Optional[str] = None - description: Optional[str] = None - get: Optional[Operation] = None - put: Optional[Operation] = None - post: Optional[Operation] = None - delete: Optional[Operation] = None - options: Optional[Operation] = None - head: Optional[Operation] = None - patch: Optional[Operation] = None - trace: Optional[Operation] = None - servers: Optional[List[Server]] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class SecuritySchemeType(Enum): - apiKey = "apiKey" - http = "http" - oauth2 = "oauth2" - openIdConnect = "openIdConnect" - - -class SecurityBase(BaseModel): - type_: SecuritySchemeType = Field(alias="type") - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class APIKeyIn(Enum): - query = "query" - header = "header" - cookie = "cookie" - - -class APIKey(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.apiKey, alias="type") - in_: APIKeyIn = Field(alias="in") - name: str - - -class HTTPBase(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.http, alias="type") - scheme: str - - -class HTTPBearer(HTTPBase): - scheme: Literal["bearer"] = "bearer" - bearerFormat: Optional[str] = None - - -class OAuthFlow(BaseModel): - refreshUrl: Optional[str] = None - scopes: Dict[str, str] = {} - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OAuthFlowImplicit(OAuthFlow): - authorizationUrl: str - - -class OAuthFlowPassword(OAuthFlow): - tokenUrl: str - - -class OAuthFlowClientCredentials(OAuthFlow): - tokenUrl: str - - -class OAuthFlowAuthorizationCode(OAuthFlow): - authorizationUrl: str - tokenUrl: str - - -class OAuthFlows(BaseModel): - implicit: Optional[OAuthFlowImplicit] = None - password: Optional[OAuthFlowPassword] = None - clientCredentials: Optional[OAuthFlowClientCredentials] = None - authorizationCode: Optional[OAuthFlowAuthorizationCode] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OAuth2(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.oauth2, alias="type") - flows: OAuthFlows - - -class OpenIdConnect(SecurityBase): - type_: SecuritySchemeType = Field( - default=SecuritySchemeType.openIdConnect, alias="type" - ) - openIdConnectUrl: str - - 
-SecurityScheme = Union[APIKey, HTTPBase, OAuth2, OpenIdConnect, HTTPBearer] - - -class Components(BaseModel): - schemas: Optional[Dict[str, Union[Schema, Reference]]] = None - responses: Optional[Dict[str, Union[Response, Reference]]] = None - parameters: Optional[Dict[str, Union[Parameter, Reference]]] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - requestBodies: Optional[Dict[str, Union[RequestBody, Reference]]] = None - headers: Optional[Dict[str, Union[Header, Reference]]] = None - securitySchemes: Optional[Dict[str, Union[SecurityScheme, Reference]]] = None - links: Optional[Dict[str, Union[Link, Reference]]] = None - # Using Any for Specification Extensions - callbacks: Optional[Dict[str, Union[Dict[str, PathItem], Reference, Any]]] = None - pathItems: Optional[Dict[str, Union[PathItem, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Tag(BaseModel): - name: str - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OpenAPI(BaseModel): - openapi: str - info: Info - jsonSchemaDialect: Optional[str] = None - servers: Optional[List[Server]] = None - # Using Any for Specification Extensions - paths: Optional[Dict[str, Union[PathItem, Any]]] = None - webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None - components: Optional[Components] = None - security: Optional[List[Dict[str, List[str]]]] = None - tags: Optional[List[Tag]] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -_model_rebuild(Schema) -_model_rebuild(Operation) -_model_rebuild(Encoding) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/mapping.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/mapping.py deleted file mode 100644 index 2b75c2e41f6f741ed03bc674e4ba43921041d864..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/mapping.py +++ /dev/null @@ -1,247 +0,0 @@ -import array -import posixpath -import warnings -from collections.abc import MutableMapping -from functools import cached_property - -from .core import url_to_fs - - -class FSMap(MutableMapping): - """Wrap a FileSystem instance as a mutable wrapping. - - The keys of the mapping become files under the given root, and the - values (which must be bytes) the contents of those files. - - Parameters - ---------- - root: string - prefix for all the files - fs: FileSystem instance - check: bool (=True) - performs a touch at the location, to check for write access. 
- - Examples - -------- - >>> fs = FileSystem(**parameters) # doctest: +SKIP - >>> d = FSMap('my-data/path/', fs) # doctest: +SKIP - or, more likely - >>> d = fs.get_mapper('my-data/path/') - - >>> d['loc1'] = b'Hello World' # doctest: +SKIP - >>> list(d.keys()) # doctest: +SKIP - ['loc1'] - >>> d['loc1'] # doctest: +SKIP - b'Hello World' - """ - - def __init__(self, root, fs, check=False, create=False, missing_exceptions=None): - self.fs = fs - self.root = fs._strip_protocol(root).rstrip("/") - self._root_key_to_str = fs._strip_protocol(posixpath.join(root, "x"))[:-1] - if missing_exceptions is None: - missing_exceptions = ( - FileNotFoundError, - IsADirectoryError, - NotADirectoryError, - ) - self.missing_exceptions = missing_exceptions - self.check = check - self.create = create - if create: - if not self.fs.exists(root): - self.fs.mkdir(root) - if check: - if not self.fs.exists(root): - raise ValueError( - "Path %s does not exist. Create " - " with the ``create=True`` keyword" % root - ) - self.fs.touch(root + "/a") - self.fs.rm(root + "/a") - - @cached_property - def dirfs(self): - """dirfs instance that can be used with the same keys as the mapper""" - from .implementations.dirfs import DirFileSystem - - return DirFileSystem(path=self._root_key_to_str, fs=self.fs) - - def clear(self): - """Remove all keys below root - empties out mapping""" - try: - self.fs.rm(self.root, True) - self.fs.mkdir(self.root) - except: # noqa: E722 - pass - - def getitems(self, keys, on_error="raise"): - """Fetch multiple items from the store - - If the backend is async-able, this might proceed concurrently - - Parameters - ---------- - keys: list(str) - They keys to be fetched - on_error : "raise", "omit", "return" - If raise, an underlying exception will be raised (converted to KeyError - if the type is in self.missing_exceptions); if omit, keys with exception - will simply not be included in the output; if "return", all keys are - included in the output, but the value will be bytes or an exception - instance. 
- - Returns - ------- - dict(key, bytes|exception) - """ - keys2 = [self._key_to_str(k) for k in keys] - oe = on_error if on_error == "raise" else "return" - try: - out = self.fs.cat(keys2, on_error=oe) - if isinstance(out, bytes): - out = {keys2[0]: out} - except self.missing_exceptions as e: - raise KeyError from e - out = { - k: (KeyError() if isinstance(v, self.missing_exceptions) else v) - for k, v in out.items() - } - return { - key: out[k2] - for key, k2 in zip(keys, keys2) - if on_error == "return" or not isinstance(out[k2], BaseException) - } - - def setitems(self, values_dict): - """Set the values of multiple items in the store - - Parameters - ---------- - values_dict: dict(str, bytes) - """ - values = {self._key_to_str(k): maybe_convert(v) for k, v in values_dict.items()} - self.fs.pipe(values) - - def delitems(self, keys): - """Remove multiple keys from the store""" - self.fs.rm([self._key_to_str(k) for k in keys]) - - def _key_to_str(self, key): - """Generate full path for the key""" - if not isinstance(key, str): - # raise TypeError("key must be of type `str`, got `{type(key).__name__}`" - warnings.warn( - "from fsspec 2023.5 onward FSMap non-str keys will raise TypeError", - DeprecationWarning, - ) - if isinstance(key, list): - key = tuple(key) - key = str(key) - return f"{self._root_key_to_str}{key}" - - def _str_to_key(self, s): - """Strip path of to leave key name""" - return s[len(self.root) :].lstrip("/") - - def __getitem__(self, key, default=None): - """Retrieve data""" - k = self._key_to_str(key) - try: - result = self.fs.cat(k) - except self.missing_exceptions: - if default is not None: - return default - raise KeyError(key) - return result - - def pop(self, key, default=None): - """Pop data""" - result = self.__getitem__(key, default) - try: - del self[key] - except KeyError: - pass - return result - - def __setitem__(self, key, value): - """Store value in key""" - key = self._key_to_str(key) - self.fs.mkdirs(self.fs._parent(key), exist_ok=True) - self.fs.pipe_file(key, maybe_convert(value)) - - def __iter__(self): - return (self._str_to_key(x) for x in self.fs.find(self.root)) - - def __len__(self): - return len(self.fs.find(self.root)) - - def __delitem__(self, key): - """Remove key""" - try: - self.fs.rm(self._key_to_str(key)) - except: # noqa: E722 - raise KeyError - - def __contains__(self, key): - """Does key exist in mapping?""" - path = self._key_to_str(key) - return self.fs.exists(path) and self.fs.isfile(path) - - def __reduce__(self): - return FSMap, (self.root, self.fs, False, False, self.missing_exceptions) - - -def maybe_convert(value): - if isinstance(value, array.array) or hasattr(value, "__array__"): - # bytes-like things - if hasattr(value, "dtype") and value.dtype.kind in "Mm": - # The buffer interface doesn't support datetime64/timdelta64 numpy - # arrays - value = value.view("int64") - value = bytes(memoryview(value)) - return value - - -def get_mapper( - url="", - check=False, - create=False, - missing_exceptions=None, - alternate_root=None, - **kwargs, -): - """Create key-value interface for given URL and options - - The URL will be of the form "protocol://location" and point to the root - of the mapper required. All keys will be file-names below this location, - and their values the contents of each key. - - Also accepts compound URLs like zip::s3://bucket/file.zip , see ``fsspec.open``. 
- - Parameters - ---------- - url: str - Root URL of mapping - check: bool - Whether to attempt to read from the location before instantiation, to - check that the mapping does exist - create: bool - Whether to make the directory corresponding to the root before - instantiating - missing_exceptions: None or tuple - If given, these exception types will be regarded as missing keys and - return KeyError when trying to read data. By default, you get - (FileNotFoundError, IsADirectoryError, NotADirectoryError) - alternate_root: None or str - In cases of complex URLs, the parser may fail to pick the correct part - for the mapper root, so this arg can override - - Returns - ------- - ``FSMap`` instance, the dict-like key-value store. - """ - # Removing protocol here - could defer to each open() on the backend - fs, urlpath = url_to_fs(url, **kwargs) - root = alternate_root if alternate_root is not None else urlpath - return FSMap(root, fs, check, create, missing_exceptions=missing_exceptions) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/number.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/number.py deleted file mode 100644 index 6f6d0149ac50a0189ba1baed94cd0b33e0ccf071..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/number.py +++ /dev/null @@ -1,238 +0,0 @@ -"""gr.Number() component.""" - -from __future__ import annotations - -import math -import warnings -from typing import Callable, Literal - -import numpy as np -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import NumberSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.events import ( - Changeable, - Focusable, - Inputable, - Submittable, -) -from gradio.exceptions import Error -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class Number( - FormComponent, - Changeable, - Inputable, - Submittable, - Focusable, - IOComponent, - NumberSerializable, - NeighborInterpretable, -): - """ - Creates a numeric field for user to enter numbers as input or display numeric output. - Preprocessing: passes field value as a {float} or {int} into the function, depending on `precision`. - Postprocessing: expects an {int} or {float} returned from the function and sets field value to it. - Examples-format: a {float} or {int} representing the number's value. - - Demos: tax_calculator, titanic_survival, blocks_simple_squares - """ - - def __init__( - self, - value: float | Callable | None = None, - *, - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - precision: int | None = None, - minimum: float | None = None, - maximum: float | None = None, - step: float = 1, - **kwargs, - ): - """ - Parameters: - value: default value. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. 
Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, will be editable; if False, editing will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - precision: Precision to round input/output to. If set to 0, will round to nearest integer and convert type to int. If None, no rounding happens. - minimum: Minimum value. Only applied when component is used as an input. If a user provides a smaller value, a gr.Error exception is raised by the backend. - maximum: Maximum value. Only applied when component is used as an input. If a user provides a larger value, a gr.Error exception is raised by the backend. - step: The interval between allowed numbers in the component. Can be used along with optional parameters `minimum` and `maximum` to create a range of legal values starting from `minimum` and incrementing according to this parameter. - """ - self.precision = precision - self.minimum = minimum - self.maximum = maximum - self.step = step - - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - @staticmethod - def _round_to_precision(num: float | int, precision: int | None) -> float | int: - """ - Round to a given precision. - - If precision is None, no rounding happens. If 0, num is converted to int. - - Parameters: - num: Number to round. - precision: Precision to round to. - Returns: - rounded number - """ - if precision is None: - return float(num) - elif precision == 0: - return int(round(num, precision)) - else: - return round(num, precision) - - @staticmethod - def update( - value: float | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - minimum: float | None = None, - maximum: float | None = None, - step: float = 1, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - warnings.warn( - "Using the update method is deprecated. Simply return a new object instead, e.g. `return gr.Number(...)` instead of `return gr.Number.update(...)`." 
- ) - return { - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "minimum": minimum, - "maximum": maximum, - "step": step, - "interactive": interactive, - "__type__": "update", - } - - def preprocess(self, x: float | None) -> float | None: - """ - Parameters: - x: numeric input - Returns: - number representing function input - """ - if x is None: - return None - elif self.minimum is not None and x < self.minimum: - raise Error(f"Value {x} is less than minimum value {self.minimum}.") - elif self.maximum is not None and x > self.maximum: - raise Error(f"Value {x} is greater than maximum value {self.maximum}.") - return self._round_to_precision(x, self.precision) - - def postprocess(self, y: float | None) -> float | None: - """ - Any postprocessing needed to be performed on function output. - - Parameters: - y: numeric output - Returns: - number representing function output - """ - if y is None: - return None - return self._round_to_precision(y, self.precision) - - def set_interpret_parameters( - self, steps: int = 3, delta: float = 1, delta_type: str = "percent" - ): - """ - Calculates interpretation scores of numeric values close to the input number. - Parameters: - steps: Number of nearby values to measure in each direction (above and below the input number). - delta: Size of step in each direction between nearby values. - delta_type: "percent" if delta step between nearby values should be a calculated as a percent, or "absolute" if delta should be a constant step change. - """ - self.interpretation_steps = steps - self.interpretation_delta = delta - self.interpretation_delta_type = delta_type - return self - - def get_interpretation_neighbors(self, x: float | int) -> tuple[list[float], dict]: - x = self._round_to_precision(x, self.precision) - if self.interpretation_delta_type == "percent": - delta = 1.0 * self.interpretation_delta * x / 100 - elif self.interpretation_delta_type == "absolute": - delta = self.interpretation_delta - else: - delta = self.interpretation_delta - if self.precision == 0 and math.floor(delta) != delta: - raise ValueError( - f"Delta value {delta} is not an integer and precision=0. Cannot generate valid set of neighbors. " - "If delta_type='percent', pick a value of delta such that x * delta is an integer. " - "If delta_type='absolute', pick a value of delta that is an integer." - ) - # run_interpretation will preprocess the neighbors so no need to convert to int here - negatives = ( - np.array(x) + np.arange(-self.interpretation_steps, 0) * delta - ).tolist() - positives = ( - np.array(x) + np.arange(1, self.interpretation_steps + 1) * delta - ).tolist() - return negatives + positives, {} - - def get_interpretation_scores( - self, x: float, neighbors: list[float], scores: list[float | None], **kwargs - ) -> list[tuple[float, float | None]]: - """ - Returns: - Each tuple set represents a numeric value near the input and its corresponding interpretation score. 
- """ - interpretation = list(zip(neighbors, scores)) - interpretation.insert(int(len(interpretation) / 2), (x, None)) - return interpretation diff --git a/spaces/johnhelf/roop/roop/processors/frame/__init__.py b/spaces/johnhelf/roop/roop/processors/frame/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jone/Music_Source_Separation/bytesep/losses.py b/spaces/jone/Music_Source_Separation/bytesep/losses.py deleted file mode 100644 index 58e79fb10c3a6cec7493ddd77e9137ab7ddc1de3..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/losses.py +++ /dev/null @@ -1,106 +0,0 @@ -import math -from typing import Callable - -import torch -import torch.nn as nn -from torchlibrosa.stft import STFT - -from bytesep.models.pytorch_modules import Base - - -def l1(output: torch.Tensor, target: torch.Tensor, **kwargs) -> torch.Tensor: - r"""L1 loss. - - Args: - output: torch.Tensor - target: torch.Tensor - - Returns: - loss: torch.float - """ - return torch.mean(torch.abs(output - target)) - - -def l1_wav(output: torch.Tensor, target: torch.Tensor, **kwargs) -> torch.Tensor: - r"""L1 loss in the time-domain. - - Args: - output: torch.Tensor - target: torch.Tensor - - Returns: - loss: torch.float - """ - return l1(output, target) - - -class L1_Wav_L1_Sp(nn.Module, Base): - def __init__(self): - r"""L1 loss in the time-domain and L1 loss on the spectrogram.""" - super(L1_Wav_L1_Sp, self).__init__() - - self.window_size = 2048 - hop_size = 441 - center = True - pad_mode = "reflect" - window = "hann" - - self.stft = STFT( - n_fft=self.window_size, - hop_length=hop_size, - win_length=self.window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - def __call__( - self, output: torch.Tensor, target: torch.Tensor, **kwargs - ) -> torch.Tensor: - r"""L1 loss in the time-domain and on the spectrogram. - - Args: - output: torch.Tensor - target: torch.Tensor - - Returns: - loss: torch.float - """ - - # L1 loss in the time-domain. - wav_loss = l1_wav(output, target) - - # L1 loss on the spectrogram. - sp_loss = l1( - self.wav_to_spectrogram(output, eps=1e-8), - self.wav_to_spectrogram(target, eps=1e-8), - ) - - # sp_loss /= math.sqrt(self.window_size) - # sp_loss *= 1. - - # Total loss. - return wav_loss + sp_loss - - return sp_loss - - -def get_loss_function(loss_type: str) -> Callable: - r"""Get loss function. 
- - Args: - loss_type: str - - Returns: - loss function: Callable - """ - - if loss_type == "l1_wav": - return l1_wav - - elif loss_type == "l1_wav_l1_sp": - return L1_Wav_L1_Sp() - - else: - raise NotImplementedError diff --git a/spaces/justYu2001/furniture-detection/models/experimental.py b/spaces/justYu2001/furniture-detection/models/experimental.py deleted file mode 100644 index 223bbdeadb4dbbb305c3fd1ba4848aafc2339f6f..0000000000000000000000000000000000000000 --- a/spaces/justYu2001/furniture-detection/models/experimental.py +++ /dev/null @@ -1,262 +0,0 @@ -import numpy as np -import random -import torch -import torch.nn as nn - -from models.common import Conv, DWConv -from utils.google_utils import attempt_download - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super(CrossConv, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super(Sum, self).__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super(MixConv2d, self).__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super(Ensemble, self).__init__() - - def forward(self, x, augment=False): - y = [] - for module in self: - y.append(module(x, augment)[0]) - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - - - - -class ORT_NMS(torch.autograd.Function): - '''ONNX-Runtime NMS operation''' - @staticmethod - def forward(ctx, - boxes, - scores, - max_output_boxes_per_class=torch.tensor([100]), - iou_threshold=torch.tensor([0.45]), - score_threshold=torch.tensor([0.25])): - device = boxes.device - batch = scores.shape[0] - num_det = random.randint(0, 100) - batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device) - idxs 
= torch.arange(100, 100 + num_det).to(device) - zeros = torch.zeros((num_det,), dtype=torch.int64).to(device) - selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous() - selected_indices = selected_indices.to(torch.int64) - return selected_indices - - @staticmethod - def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold): - return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold) - - -class TRT_NMS(torch.autograd.Function): - '''TensorRT NMS operation''' - @staticmethod - def forward( - ctx, - boxes, - scores, - background_class=-1, - box_coding=1, - iou_threshold=0.45, - max_output_boxes=100, - plugin_version="1", - score_activation=0, - score_threshold=0.25, - ): - batch_size, num_boxes, num_classes = scores.shape - num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32) - det_boxes = torch.randn(batch_size, max_output_boxes, 4) - det_scores = torch.randn(batch_size, max_output_boxes) - det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32) - return num_det, det_boxes, det_scores, det_classes - - @staticmethod - def symbolic(g, - boxes, - scores, - background_class=-1, - box_coding=1, - iou_threshold=0.45, - max_output_boxes=100, - plugin_version="1", - score_activation=0, - score_threshold=0.25): - out = g.op("TRT::EfficientNMS_TRT", - boxes, - scores, - background_class_i=background_class, - box_coding_i=box_coding, - iou_threshold_f=iou_threshold, - max_output_boxes_i=max_output_boxes, - plugin_version_s=plugin_version, - score_activation_i=score_activation, - score_threshold_f=score_threshold, - outputs=4) - nums, boxes, scores, classes = out - return nums, boxes, scores, classes - - -class ONNX_ORT(nn.Module): - '''onnx module with ONNX-Runtime NMS operation.''' - def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=640, device=None): - super().__init__() - self.device = device if device else torch.device("cpu") - self.max_obj = torch.tensor([max_obj]).to(device) - self.iou_threshold = torch.tensor([iou_thres]).to(device) - self.score_threshold = torch.tensor([score_thres]).to(device) - self.max_wh = max_wh # if max_wh != 0 : non-agnostic else : agnostic - self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]], - dtype=torch.float32, - device=self.device) - - def forward(self, x): - boxes = x[:, :, :4] - conf = x[:, :, 4:5] - scores = x[:, :, 5:] - scores *= conf - boxes @= self.convert_matrix - max_score, category_id = scores.max(2, keepdim=True) - dis = category_id.float() * self.max_wh - nmsbox = boxes + dis - max_score_tp = max_score.transpose(1, 2).contiguous() - selected_indices = ORT_NMS.apply(nmsbox, max_score_tp, self.max_obj, self.iou_threshold, self.score_threshold) - X, Y = selected_indices[:, 0], selected_indices[:, 2] - selected_boxes = boxes[X, Y, :] - selected_categories = category_id[X, Y, :].float() - selected_scores = max_score[X, Y, :] - X = X.unsqueeze(1).float() - return torch.cat([X, selected_boxes, selected_categories, selected_scores], 1) - -class ONNX_TRT(nn.Module): - '''onnx module with TensorRT NMS operation.''' - def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None ,device=None): - super().__init__() - assert max_wh is None - self.device = device if device else torch.device('cpu') - self.background_class = -1, - self.box_coding = 1, - self.iou_threshold = iou_thres - self.max_obj = 
max_obj - self.plugin_version = '1' - self.score_activation = 0 - self.score_threshold = score_thres - - def forward(self, x): - boxes = x[:, :, :4] - conf = x[:, :, 4:5] - scores = x[:, :, 5:] - scores *= conf - num_det, det_boxes, det_scores, det_classes = TRT_NMS.apply(boxes, scores, self.background_class, self.box_coding, - self.iou_threshold, self.max_obj, - self.plugin_version, self.score_activation, - self.score_threshold) - return num_det, det_boxes, det_scores, det_classes - - -class End2End(nn.Module): - '''export onnx or tensorrt model with NMS operation.''' - def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None, device=None): - super().__init__() - device = device if device else torch.device('cpu') - assert isinstance(max_wh,(int)) or max_wh is None - self.model = model.to(device) - self.model.model[-1].end2end = True - self.patch_model = ONNX_TRT if max_wh is None else ONNX_ORT - self.end2end = self.patch_model(max_obj, iou_thres, score_thres, max_wh, device) - self.end2end.eval() - - def forward(self, x): - x = self.model(x) - x = self.end2end(x) - return x - - - - - -def attempt_load(weights, map_location=None): - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - model = Ensemble() - for w in weights if isinstance(weights, list) else [weights]: - # attempt_download(w) - ckpt = torch.load(w, map_location=map_location) # load - model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model - - # Compatibility updates - for m in model.modules(): - if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True # pytorch 1.7.0 compatibility - elif type(m) is nn.Upsample: - m.recompute_scale_factor = None # torch 1.11.0 compatibility - elif type(m) is Conv: - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - - if len(model) == 1: - return model[-1] # return model - else: - print('Ensemble created with %s\n' % weights) - for k in ['names', 'stride']: - setattr(model, k, getattr(model[-1], k)) - return model # return ensemble - - diff --git a/spaces/k1ngtai/MMS/README.md b/spaces/k1ngtai/MMS/README.md deleted file mode 100644 index 7905e99235f0254c82aba497f0c9e1728f5fa4a6..0000000000000000000000000000000000000000 --- a/spaces/k1ngtai/MMS/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MMS -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 -duplicated_from: facebook/MMS ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kdrkdrkdr/HutaoTTS/models.py b/spaces/kdrkdrkdr/HutaoTTS/models.py deleted file mode 100644 index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HutaoTTS/models.py +++ /dev/null @@ -1,540 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = 
nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab != 0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab != 0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - 
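# A minimal smoke-test sketch for the text/posterior encoders defined above.
# It assumes this models.py and its local helpers (modules, attentions, commons)
# are importable; the hyperparameters below are hypothetical, loosely following
# a stock VITS configuration, and the tensors are random dummies.
import torch

enc_p = TextEncoder(n_vocab=100, out_channels=192, hidden_channels=192,
                    filter_channels=768, n_heads=2, n_layers=6,
                    kernel_size=3, p_dropout=0.1)
enc_q = PosteriorEncoder(in_channels=513, out_channels=192, hidden_channels=192,
                         kernel_size=5, dilation_rate=1, n_layers=16)

phonemes = torch.randint(0, 100, (2, 50))        # [batch, text_len] token ids
phoneme_lengths = torch.tensor([50, 42])
spec = torch.randn(2, 513, 200)                  # [batch, spec_channels, frames]
spec_lengths = torch.tensor([200, 180])

# Prior statistics per phoneme and posterior latents per spectrogram frame.
x, m_p, logs_p, x_mask = enc_p(phonemes, phoneme_lengths)
z, m_q, logs_q, y_mask = enc_q(spec, spec_lengths)
print(x.shape, z.shape)   # torch.Size([2, 192, 50]) torch.Size([2, 192, 200])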
-class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - 
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, 
g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 1, "n_speakers have to be larger than 1." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/kepl/gpt/client/css/global.css b/spaces/kepl/gpt/client/css/global.css deleted file mode 100644 index 8de755e9df1b2c4ee74d18f00ce717b22c69db4b..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/client/css/global.css +++ /dev/null @@ -1,70 +0,0 @@ -@import url("https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap"); -* { - --font-1: "Inter", sans-serif; - --section-gap: 24px; - --border-radius-1: 8px; - margin: 0; - padding: 0; - box-sizing: border-box; - position: relative; - font-family: var(--font-1); -} - -.theme-light { - --colour-1: #f5f5f5; - --colour-2: #000000; - --colour-3: #474747; - --colour-4: #949494; - --colour-5: #ebebeb; - --colour-6: #dadada; - - --accent: #3a3a3a; - --blur-bg: #ffffff; - --blur-border: #dbdbdb; - --user-input: #282828; - --conversations: #666666; -} - -.theme-dark { - --colour-1: #181818; - --colour-2: #ccc; - --colour-3: #dadada; - --colour-4: #f0f0f0; - --colour-5: #181818; - --colour-6: #242424; - - --accent: #151718; - --blur-bg: #242627; - --blur-border: #242627; - --user-input: #f5f5f5; - --conversations: #555555; -} - -html, -body { - background: var(--colour-1); - color: var(--colour-3); -} - -ol, -ul { - padding-left: 20px; -} - -.shown { - display: flex !important; -} - -a:-webkit-any-link { - color: var(--accent); -} - -pre { - white-space: pre-wrap; -} - -@media screen and (max-height: 720px) { - :root { - --section-gap: 16px; - } -} diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/train.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/train.py deleted file mode 100644 index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/train.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import logging -import os - -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.utils.data.distributed -from torch.nn.utils import clip_grad_norm_ - -import losses -from backbones import get_model -from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX -from partial_fc import PartialFC -from utils.utils_amp import MaxClipGradScaler -from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint -from utils.utils_config import get_config -from utils.utils_logging import AverageMeter, init_logging - - -def main(args): - cfg = get_config(args.config) - try: - world_size = int(os.environ['WORLD_SIZE']) - rank = int(os.environ['RANK']) - dist.init_process_group('nccl') - except KeyError: - world_size = 1 - rank = 0 - dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size) - - local_rank = args.local_rank - torch.cuda.set_device(local_rank) - os.makedirs(cfg.output, exist_ok=True) - init_logging(rank, cfg.output) - - if cfg.rec == "synthetic": - train_set = SyntheticDataset(local_rank=local_rank) - else: - train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank) - - train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True) - train_loader = 
DataLoaderX( - local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size, - sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True) - backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank) - - if cfg.resume: - try: - backbone_pth = os.path.join(cfg.output, "backbone.pth") - backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank))) - if rank == 0: - logging.info("backbone resume successfully!") - except (FileNotFoundError, KeyError, IndexError, RuntimeError): - if rank == 0: - logging.info("resume fail, backbone init successfully!") - - backbone = torch.nn.parallel.DistributedDataParallel( - module=backbone, broadcast_buffers=False, device_ids=[local_rank]) - backbone.train() - margin_softmax = losses.get_loss(cfg.loss) - module_partial_fc = PartialFC( - rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume, - batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes, - sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output) - - opt_backbone = torch.optim.SGD( - params=[{'params': backbone.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - opt_pfc = torch.optim.SGD( - params=[{'params': module_partial_fc.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - - num_image = len(train_set) - total_batch_size = cfg.batch_size * world_size - cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch - cfg.total_step = num_image // total_batch_size * cfg.num_epoch - - def lr_step_func(current_step): - cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch] - if current_step < cfg.warmup_step: - return current_step / cfg.warmup_step - else: - return 0.1 ** len([m for m in cfg.decay_step if m <= current_step]) - - scheduler_backbone = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_backbone, lr_lambda=lr_step_func) - scheduler_pfc = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_pfc, lr_lambda=lr_step_func) - - for key, value in cfg.items(): - num_space = 25 - len(key) - logging.info(": " + key + " " * num_space + str(value)) - - val_target = cfg.val_targets - callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec) - callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None) - callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output) - - loss = AverageMeter() - start_epoch = 0 - global_step = 0 - grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None - for epoch in range(start_epoch, cfg.num_epoch): - train_sampler.set_epoch(epoch) - for step, (img, label) in enumerate(train_loader): - global_step += 1 - features = F.normalize(backbone(img)) - x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc) - if cfg.fp16: - features.backward(grad_amp.scale(x_grad)) - grad_amp.unscale_(opt_backbone) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - grad_amp.step(opt_backbone) - grad_amp.update() - else: - features.backward(x_grad) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - opt_backbone.step() - - opt_pfc.step() - module_partial_fc.update() - opt_backbone.zero_grad() - opt_pfc.zero_grad() - loss.update(loss_v, 1) - callback_logging(global_step, loss, epoch, cfg.fp16, 
scheduler_backbone.get_last_lr()[0], grad_amp) - callback_verification(global_step, backbone) - scheduler_backbone.step() - scheduler_pfc.step() - callback_checkpoint(global_step, backbone, module_partial_fc) - dist.destroy_process_group() - - -if __name__ == "__main__": - torch.backends.cudnn.benchmark = True - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('config', type=str, help='py config file') - parser.add_argument('--local_rank', type=int, default=0, help='local_rank') - main(parser.parse_args()) diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/util/html.py b/spaces/kevinwang676/VoiceChangers/src/face3d/util/html.py deleted file mode 100644 index cc3262a1eafda34842e4dbad47bb6ba72f0c5a68..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/util/html.py +++ /dev/null @@ -1,86 +0,0 @@ -import dominate -from dominate.tags import meta, h3, table, tr, td, p, a, img, br -import os - - -class HTML: - """This HTML class allows us to save images and write texts into a single HTML file. - - It consists of functions such as add_header (add a text header to the HTML file), - add_images (add a row of images to the HTML file), and save (save the HTML to the disk). - It is based on Python library 'dominate', a Python library for creating and manipulating HTML documents using a DOM API. - """ - - def __init__(self, web_dir, title, refresh=0): - """Initialize the HTML class - - Parameters: - web_dir (str) -- a directory that stores the webpage. HTML file will be created at <web_dir>/index.html; images will be saved at <web_dir>/images/ - title (str) -- the webpage name - refresh (int) -- how often the website refreshes itself; if 0, no refreshing - """ - self.title = title - self.web_dir = web_dir - self.img_dir = os.path.join(self.web_dir, 'images') - if not os.path.exists(self.web_dir): - os.makedirs(self.web_dir) - if not os.path.exists(self.img_dir): - os.makedirs(self.img_dir) - - self.doc = dominate.document(title=title) - if refresh > 0: - with self.doc.head: - meta(http_equiv="refresh", content=str(refresh)) - - def get_image_dir(self): - """Return the directory that stores images""" - return self.img_dir - - def add_header(self, text): - """Insert a header to the HTML file - - Parameters: - text (str) -- the header text - """ - with self.doc: - h3(text) - - def add_images(self, ims, txts, links, width=400): - """add images to the HTML file - - Parameters: - ims (str list) -- a list of image paths - txts (str list) -- a list of image names shown on the website - links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page - """ - self.t = table(border=1, style="table-layout: fixed;") # Insert a table - self.doc.add(self.t) - with self.t: - with tr(): - for im, txt, link in zip(ims, txts, links): - with td(style="word-wrap: break-word;", halign="center", valign="top"): - with p(): - with a(href=os.path.join('images', link)): - img(style="width:%dpx" % width, src=os.path.join('images', im)) - br() - p(txt) - - def save(self): - """save the current content to the HTML file""" - html_file = '%s/index.html' % self.web_dir - f = open(html_file, 'wt') - f.write(self.doc.render()) - f.close() - - -if __name__ == '__main__': # we show an example usage here.
- html = HTML('web/', 'test_html') - html.add_header('hello world') - - ims, txts, links = [], [], [] - for n in range(4): - ims.append('image_%d.png' % n) - txts.append('text_%d' % n) - links.append('image_%d.png' % n) - html.add_images(ims, txts, links) - html.save() diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer_preprocess_audio.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer_preprocess_audio.py deleted file mode 100644 index 51d92f91a485ea853957127bec9166420daed934..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer_preprocess_audio.py +++ /dev/null @@ -1,65 +0,0 @@ -from synthesizer.preprocess import preprocess_dataset -from synthesizer.hparams import hparams -from utils.argutils import print_args -from pathlib import Path -import argparse - - -recognized_datasets = [ - "aidatatang_200zh", - "magicdata", - "aishell3" -] - -if __name__ == "__main__": - print("This method is deprecated and is no longer supported, please use 'pre.py'") - parser = argparse.ArgumentParser( - description="Preprocesses audio files from datasets, encodes them as mel spectrograms " - "and writes them to the disk. Audio files are also saved, to be used by the " - "vocoder for training.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("datasets_root", type=Path, help=\ - "Path to the directory containing your LibriSpeech/TTS datasets.") - parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\ - "Path to the output directory that will contain the mel spectrograms, the audios and the " - "embeds. Defaults to <datasets_root>/SV2TTS/synthesizer/") - parser.add_argument("-n", "--n_processes", type=int, default=None, help=\ - "Number of processes in parallel.") - parser.add_argument("-s", "--skip_existing", action="store_true", help=\ - "Whether to overwrite existing files with the same name. Useful if the preprocessing was " - "interrupted.") - parser.add_argument("--hparams", type=str, default="", help=\ - "Hyperparameter overrides as a comma-separated list of name-value pairs") - parser.add_argument("--no_trim", action="store_true", help=\ - "Preprocess audio without trimming silences (not recommended).") - parser.add_argument("--no_alignments", action="store_true", help=\ - "Use this option when dataset does not include alignments\ - (these are used to split long audio files into sub-utterances.)") - parser.add_argument("--dataset", type=str, default="aidatatang_200zh", help=\ - "Name of the dataset to process, allowed values: magicdata, aidatatang_200zh.") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "out_dir"): - args.out_dir = args.datasets_root.joinpath("SV2TTS", "synthesizer") - assert args.dataset in recognized_datasets, 'is not supported, please vote for it in https://github.com/babysor/MockingBird/issues/10' - # Create directories - assert args.datasets_root.exists() - args.out_dir.mkdir(exist_ok=True, parents=True) - - # Verify webrtcvad is available - if not args.no_trim: - try: - import webrtcvad - except: - raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables " - "noise removal and is recommended. Please install and try again.
If installation fails, " - "use --no_trim to disable this error message.") - del args.no_trim - - # Preprocess the dataset - print_args(args, parser) - args.hparams = hparams.parse(args.hparams) - - preprocess_dataset(**vars(args)) \ No newline at end of file diff --git a/spaces/kottu/stabble_diffusion_sketch/app_base.py b/spaces/kottu/stabble_diffusion_sketch/app_base.py deleted file mode 100644 index c42af19963c5904e4d4ab7534fe63f738a4854f8..0000000000000000000000000000000000000000 --- a/spaces/kottu/stabble_diffusion_sketch/app_base.py +++ /dev/null @@ -1,274 +0,0 @@ -import os - -import gradio as gr -import PIL.Image -from diffusers.utils import load_image - -from model import ADAPTER_NAMES, Model -from utils import ( - DEFAULT_STYLE_NAME, - MAX_SEED, - STYLE_NAMES, - apply_style, - randomize_seed_fn, -) - -CACHE_EXAMPLES = os.environ.get("CACHE_EXAMPLES") == "1" - - -def create_demo(model: Model) -> gr.Blocks: - def run( - image: PIL.Image.Image, - prompt: str, - negative_prompt: str, - adapter_name: str, - style_name: str = DEFAULT_STYLE_NAME, - num_inference_steps: int = 30, - guidance_scale: float = 5.0, - adapter_conditioning_scale: float = 1.0, - adapter_conditioning_factor: float = 1.0, - seed: int = 0, - apply_preprocess: bool = True, - progress=gr.Progress(track_tqdm=True), - ) -> list[PIL.Image.Image]: - prompt, negative_prompt = apply_style(style_name, prompt, negative_prompt) - - return model.run( - image=image, - prompt=prompt, - negative_prompt=negative_prompt, - adapter_name=adapter_name, - num_inference_steps=num_inference_steps, - guidance_scale=guidance_scale, - adapter_conditioning_scale=adapter_conditioning_scale, - adapter_conditioning_factor=adapter_conditioning_factor, - seed=seed, - apply_preprocess=apply_preprocess, - ) - - def process_example( - image_url: str, - prompt: str, - adapter_name: str, - guidance_scale: float, - adapter_conditioning_scale: float, - seed: int, - apply_preprocess: bool, - ) -> list[PIL.Image.Image]: - image = load_image(image_url) - return run( - image=image, - prompt=prompt, - negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured", - adapter_name=adapter_name, - style_name="(No style)", - guidance_scale=guidance_scale, - adapter_conditioning_scale=adapter_conditioning_scale, - seed=seed, - apply_preprocess=apply_preprocess, - ) - - examples = [ - [ - "assets/org_canny.jpg", - "Mystical fairy in real, magic, 4k picture, high quality", - "canny", - 7.5, - 0.75, - 42, - True, - ], - [ - "assets/org_sketch.png", - "a robot, mount fuji in the background, 4k photo, highly detailed", - "sketch", - 7.5, - 1.0, - 42, - True, - ], - [ - "assets/org_lin.jpg", - "Ice dragon roar, 4k photo", - "lineart", - 7.5, - 0.8, - 42, - True, - ], - [ - "assets/org_mid.jpg", - "A photo of a room, 4k photo, highly detailed", - "depth-midas", - 7.5, - 1.0, - 42, - True, - ], - [ - "assets/org_zoe.jpg", - "A photo of a orchid, 4k photo, highly detailed", - "depth-zoe", - 5.0, - 1.0, - 42, - True, - ], - [ - "assets/people.jpg", - "A couple, 4k photo, highly detailed", - "openpose", - 5.0, - 1.0, - 42, - True, - ], - [ - "assets/depth-midas-image.png", - "stormtrooper lecture, 4k photo, highly detailed", - "depth-midas", - 7.5, - 1.0, - 42, - False, - ], - [ - "assets/openpose-image.png", - "spiderman, 4k photo, highly detailed", - "openpose", - 5.0, - 1.0, - 42, - False, - ], - ] - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Group(): - image = 
gr.Image(label="Input image", type="pil", height=600) - prompt = gr.Textbox(label="Prompt") - with gr.Row(): - adapter_name = gr.Dropdown(label="Adapter name", choices=ADAPTER_NAMES, value=ADAPTER_NAMES[0]) - style = gr.Dropdown(label="Style", choices=STYLE_NAMES, value=DEFAULT_STYLE_NAME) - run_button = gr.Button("Run") - with gr.Accordion("Advanced options", open=False): - apply_preprocess = gr.Checkbox(label="Apply preprocess", value=True) - negative_prompt = gr.Textbox( - label="Negative prompt", - value=" extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured", - ) - num_inference_steps = gr.Slider( - label="Number of steps", - minimum=1, - maximum=Model.MAX_NUM_INFERENCE_STEPS, - step=1, - value=25, - ) - guidance_scale = gr.Slider( - label="Guidance scale", - minimum=0.1, - maximum=30.0, - step=0.1, - value=5.0, - ) - adapter_conditioning_scale = gr.Slider( - label="Adapter conditioning scale", - minimum=0.5, - maximum=1, - step=0.1, - value=1.0, - ) - adapter_conditioning_factor = gr.Slider( - label="Adapter conditioning factor", - info="Fraction of timesteps for which adapter should be applied", - minimum=0.5, - maximum=1.0, - step=0.1, - value=1.0, - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=42, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=False) - with gr.Column(): - result = gr.Gallery(label="Result", columns=2, height=600, object_fit="scale-down", show_label=False) - - gr.Examples( - examples=examples, - inputs=[ - image, - prompt, - adapter_name, - guidance_scale, - adapter_conditioning_scale, - seed, - apply_preprocess, - ], - outputs=result, - fn=process_example, - cache_examples=CACHE_EXAMPLES, - ) - - inputs = [ - image, - prompt, - negative_prompt, - adapter_name, - style, - num_inference_steps, - guidance_scale, - adapter_conditioning_scale, - adapter_conditioning_factor, - seed, - apply_preprocess, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=run, - inputs=inputs, - outputs=result, - api_name=False, - ) - negative_prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=run, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=run, - inputs=inputs, - outputs=result, - api_name="run", - ) - - return demo - - -if __name__ == "__main__": - model = Model(ADAPTER_NAMES[0]) - demo = create_demo(model) - demo.queue(max_size=20).launch() \ No newline at end of file diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/mask_example.py b/spaces/kquote03/lama-video-watermark-remover/bin/mask_example.py deleted file mode 100644 index 59e25ca8eb3ed4141851c3af284fc66285444de0..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/mask_example.py +++ /dev/null @@ -1,14 +0,0 @@ -import matplotlib.pyplot as plt -from skimage import io -from skimage.transform import resize - -from saicinpainting.evaluation.masks.mask import SegmentationMask - -im = io.imread('imgs/ex4.jpg') -im = resize(im, (512, 1024), anti_aliasing=True) -mask_seg = SegmentationMask(num_variants_per_mask=10) -mask_examples = mask_seg.get_masks(im) -for i, example in enumerate(mask_examples): - plt.imshow(example) - plt.show() - 
plt.imsave(f'tmp/img_masks/{i}.png', example) diff --git a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/__init__.py b/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/__init__.py deleted file mode 100644 index f3b008fb13c5e8a84b1b785056e8c4f5226dc976..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ - -from .dataset import Dataset, TensorDataset, ConcatDataset -from .dataloader import DataLoader diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/validators.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/validators.py deleted file mode 100644 index 1488554f789526d8d85eb467250a64a64489362d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/validators.py +++ /dev/null @@ -1,720 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly useful validators. -""" - - -import operator -import re - -from contextlib import contextmanager -from re import Pattern - -from ._config import get_run_validators, set_run_validators -from ._make import _AndValidator, and_, attrib, attrs -from .converters import default_if_none -from .exceptions import NotCallableError - - -__all__ = [ - "and_", - "deep_iterable", - "deep_mapping", - "disabled", - "ge", - "get_disabled", - "gt", - "in_", - "instance_of", - "is_callable", - "le", - "lt", - "matches_re", - "max_len", - "min_len", - "not_", - "optional", - "provides", - "set_disabled", -] - - -def set_disabled(disabled): - """ - Globally disable or enable running validators. - - By default, they are run. - - :param disabled: If ``True``, disable running all validators. - :type disabled: bool - - .. warning:: - - This function is not thread-safe! - - .. versionadded:: 21.3.0 - """ - set_run_validators(not disabled) - - -def get_disabled(): - """ - Return a bool indicating whether validators are currently disabled or not. - - :return: ``True`` if validators are currently disabled. - :rtype: bool - - .. versionadded:: 21.3.0 - """ - return not get_run_validators() - - -@contextmanager -def disabled(): - """ - Context manager that disables running validators within its context. - - .. warning:: - - This context manager is not thread-safe! - - .. versionadded:: 21.3.0 - """ - set_run_validators(False) - try: - yield - finally: - set_run_validators(True) - - -@attrs(repr=False, slots=True, hash=True) -class _InstanceOfValidator: - type = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not isinstance(value, self.type): - raise TypeError( - "'{name}' must be {type!r} (got {value!r} that is a " - "{actual!r}).".format( - name=attr.name, - type=self.type, - actual=value.__class__, - value=value, - ), - attr, - self.type, - value, - ) - - def __repr__(self): - return "".format( - type=self.type - ) - - -def instance_of(type): - """ - A validator that raises a `TypeError` if the initializer is called - with a wrong type for this particular attribute (checks are performed using - `isinstance` therefore it's also valid to pass a tuple of types). - - :param type: The type to check for. - :type type: type or tuple of type - - :raises TypeError: With a human readable error message, the attribute - (of type `attrs.Attribute`), the expected type, and the value it - got. 
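A minimal usage sketch of the validator documented above (not part of the original `validators.py`; the `Point` class is invented for illustration, and `attr.s` / `attr.ib` are the public attrs decorators rather than anything defined in this excerpt):

```python
import attr
from attr.validators import instance_of

@attr.s
class Point:
    x = attr.ib(validator=instance_of(int))
    y = attr.ib(validator=instance_of(int))

Point(1, 2)        # passes validation
try:
    Point(1, "2")  # instance_of raises TypeError naming the attribute, expected type and value
except TypeError as exc:
    print(exc)
```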
- """ - return _InstanceOfValidator(type) - - -@attrs(repr=False, frozen=True, slots=True) -class _MatchesReValidator: - pattern = attrib() - match_func = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not self.match_func(value): - raise ValueError( - "'{name}' must match regex {pattern!r}" - " ({value!r} doesn't)".format( - name=attr.name, pattern=self.pattern.pattern, value=value - ), - attr, - self.pattern, - value, - ) - - def __repr__(self): - return "".format( - pattern=self.pattern - ) - - -def matches_re(regex, flags=0, func=None): - r""" - A validator that raises `ValueError` if the initializer is called - with a string that doesn't match *regex*. - - :param regex: a regex string or precompiled pattern to match against - :param int flags: flags that will be passed to the underlying re function - (default 0) - :param callable func: which underlying `re` function to call. Valid options - are `re.fullmatch`, `re.search`, and `re.match`; the default ``None`` - means `re.fullmatch`. For performance reasons, the pattern is always - precompiled using `re.compile`. - - .. versionadded:: 19.2.0 - .. versionchanged:: 21.3.0 *regex* can be a pre-compiled pattern. - """ - valid_funcs = (re.fullmatch, None, re.search, re.match) - if func not in valid_funcs: - raise ValueError( - "'func' must be one of {}.".format( - ", ".join( - sorted( - e and e.__name__ or "None" for e in set(valid_funcs) - ) - ) - ) - ) - - if isinstance(regex, Pattern): - if flags: - raise TypeError( - "'flags' can only be used with a string pattern; " - "pass flags to re.compile() instead" - ) - pattern = regex - else: - pattern = re.compile(regex, flags) - - if func is re.match: - match_func = pattern.match - elif func is re.search: - match_func = pattern.search - else: - match_func = pattern.fullmatch - - return _MatchesReValidator(pattern, match_func) - - -@attrs(repr=False, slots=True, hash=True) -class _ProvidesValidator: - interface = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not self.interface.providedBy(value): - raise TypeError( - "'{name}' must provide {interface!r} which {value!r} " - "doesn't.".format( - name=attr.name, interface=self.interface, value=value - ), - attr, - self.interface, - value, - ) - - def __repr__(self): - return "".format( - interface=self.interface - ) - - -def provides(interface): - """ - A validator that raises a `TypeError` if the initializer is called - with an object that does not provide the requested *interface* (checks are - performed using ``interface.providedBy(value)`` (see `zope.interface - `_). - - :param interface: The interface to check for. - :type interface: ``zope.interface.Interface`` - - :raises TypeError: With a human readable error message, the attribute - (of type `attrs.Attribute`), the expected interface, and the - value it got. - - .. 
deprecated:: 23.1.0 - """ - import warnings - - warnings.warn( - "attrs's zope-interface support is deprecated and will be removed in, " - "or after, April 2024.", - DeprecationWarning, - stacklevel=2, - ) - return _ProvidesValidator(interface) - - -@attrs(repr=False, slots=True, hash=True) -class _OptionalValidator: - validator = attrib() - - def __call__(self, inst, attr, value): - if value is None: - return - - self.validator(inst, attr, value) - - def __repr__(self): - return "".format( - what=repr(self.validator) - ) - - -def optional(validator): - """ - A validator that makes an attribute optional. An optional attribute is one - which can be set to ``None`` in addition to satisfying the requirements of - the sub-validator. - - :param Callable | tuple[Callable] | list[Callable] validator: A validator - (or validators) that is used for non-``None`` values. - - .. versionadded:: 15.1.0 - .. versionchanged:: 17.1.0 *validator* can be a list of validators. - .. versionchanged:: 23.1.0 *validator* can also be a tuple of validators. - """ - if isinstance(validator, (list, tuple)): - return _OptionalValidator(_AndValidator(validator)) - - return _OptionalValidator(validator) - - -@attrs(repr=False, slots=True, hash=True) -class _InValidator: - options = attrib() - - def __call__(self, inst, attr, value): - try: - in_options = value in self.options - except TypeError: # e.g. `1 in "abc"` - in_options = False - - if not in_options: - raise ValueError( - "'{name}' must be in {options!r} (got {value!r})".format( - name=attr.name, options=self.options, value=value - ), - attr, - self.options, - value, - ) - - def __repr__(self): - return "".format( - options=self.options - ) - - -def in_(options): - """ - A validator that raises a `ValueError` if the initializer is called - with a value that does not belong in the options provided. The check is - performed using ``value in options``. - - :param options: Allowed options. - :type options: list, tuple, `enum.Enum`, ... - - :raises ValueError: With a human readable error message, the attribute (of - type `attrs.Attribute`), the expected options, and the value it - got. - - .. versionadded:: 17.1.0 - .. versionchanged:: 22.1.0 - The ValueError was incomplete until now and only contained the human - readable error message. Now it contains all the information that has - been promised since 17.1.0. - """ - return _InValidator(options) - - -@attrs(repr=False, slots=False, hash=True) -class _IsCallableValidator: - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not callable(value): - message = ( - "'{name}' must be callable " - "(got {value!r} that is a {actual!r})." - ) - raise NotCallableError( - msg=message.format( - name=attr.name, value=value, actual=value.__class__ - ), - value=value, - ) - - def __repr__(self): - return "" - - -def is_callable(): - """ - A validator that raises a `attrs.exceptions.NotCallableError` if the - initializer is called with a value for this particular attribute - that is not callable. - - .. versionadded:: 19.1.0 - - :raises attrs.exceptions.NotCallableError: With a human readable error - message containing the attribute (`attrs.Attribute`) name, - and the value it got. 
- """ - return _IsCallableValidator() - - -@attrs(repr=False, slots=True, hash=True) -class _DeepIterable: - member_validator = attrib(validator=is_callable()) - iterable_validator = attrib( - default=None, validator=optional(is_callable()) - ) - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if self.iterable_validator is not None: - self.iterable_validator(inst, attr, value) - - for member in value: - self.member_validator(inst, attr, member) - - def __repr__(self): - iterable_identifier = ( - "" - if self.iterable_validator is None - else f" {self.iterable_validator!r}" - ) - return ( - "" - ).format( - iterable_identifier=iterable_identifier, - member=self.member_validator, - ) - - -def deep_iterable(member_validator, iterable_validator=None): - """ - A validator that performs deep validation of an iterable. - - :param member_validator: Validator(s) to apply to iterable members - :param iterable_validator: Validator to apply to iterable itself - (optional) - - .. versionadded:: 19.1.0 - - :raises TypeError: if any sub-validators fail - """ - if isinstance(member_validator, (list, tuple)): - member_validator = and_(*member_validator) - return _DeepIterable(member_validator, iterable_validator) - - -@attrs(repr=False, slots=True, hash=True) -class _DeepMapping: - key_validator = attrib(validator=is_callable()) - value_validator = attrib(validator=is_callable()) - mapping_validator = attrib(default=None, validator=optional(is_callable())) - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if self.mapping_validator is not None: - self.mapping_validator(inst, attr, value) - - for key in value: - self.key_validator(inst, attr, key) - self.value_validator(inst, attr, value[key]) - - def __repr__(self): - return ( - "" - ).format(key=self.key_validator, value=self.value_validator) - - -def deep_mapping(key_validator, value_validator, mapping_validator=None): - """ - A validator that performs deep validation of a dictionary. - - :param key_validator: Validator to apply to dictionary keys - :param value_validator: Validator to apply to dictionary values - :param mapping_validator: Validator to apply to top-level mapping - attribute (optional) - - .. versionadded:: 19.1.0 - - :raises TypeError: if any sub-validators fail - """ - return _DeepMapping(key_validator, value_validator, mapping_validator) - - -@attrs(repr=False, frozen=True, slots=True) -class _NumberValidator: - bound = attrib() - compare_op = attrib() - compare_func = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not self.compare_func(value, self.bound): - raise ValueError( - "'{name}' must be {op} {bound}: {value}".format( - name=attr.name, - op=self.compare_op, - bound=self.bound, - value=value, - ) - ) - - def __repr__(self): - return "".format( - op=self.compare_op, bound=self.bound - ) - - -def lt(val): - """ - A validator that raises `ValueError` if the initializer is called - with a number larger or equal to *val*. - - :param val: Exclusive upper bound for values - - .. versionadded:: 21.3.0 - """ - return _NumberValidator(val, "<", operator.lt) - - -def le(val): - """ - A validator that raises `ValueError` if the initializer is called - with a number greater than *val*. - - :param val: Inclusive upper bound for values - - .. 
versionadded:: 21.3.0 - """ - return _NumberValidator(val, "<=", operator.le) - - -def ge(val): - """ - A validator that raises `ValueError` if the initializer is called - with a number smaller than *val*. - - :param val: Inclusive lower bound for values - - .. versionadded:: 21.3.0 - """ - return _NumberValidator(val, ">=", operator.ge) - - -def gt(val): - """ - A validator that raises `ValueError` if the initializer is called - with a number smaller or equal to *val*. - - :param val: Exclusive lower bound for values - - .. versionadded:: 21.3.0 - """ - return _NumberValidator(val, ">", operator.gt) - - -@attrs(repr=False, frozen=True, slots=True) -class _MaxLengthValidator: - max_length = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if len(value) > self.max_length: - raise ValueError( - "Length of '{name}' must be <= {max}: {len}".format( - name=attr.name, max=self.max_length, len=len(value) - ) - ) - - def __repr__(self): - return f"" - - -def max_len(length): - """ - A validator that raises `ValueError` if the initializer is called - with a string or iterable that is longer than *length*. - - :param int length: Maximum length of the string or iterable - - .. versionadded:: 21.3.0 - """ - return _MaxLengthValidator(length) - - -@attrs(repr=False, frozen=True, slots=True) -class _MinLengthValidator: - min_length = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if len(value) < self.min_length: - raise ValueError( - "Length of '{name}' must be => {min}: {len}".format( - name=attr.name, min=self.min_length, len=len(value) - ) - ) - - def __repr__(self): - return f"" - - -def min_len(length): - """ - A validator that raises `ValueError` if the initializer is called - with a string or iterable that is shorter than *length*. - - :param int length: Minimum length of the string or iterable - - .. versionadded:: 22.1.0 - """ - return _MinLengthValidator(length) - - -@attrs(repr=False, slots=True, hash=True) -class _SubclassOfValidator: - type = attrib() - - def __call__(self, inst, attr, value): - """ - We use a callable class to be able to change the ``__repr__``. - """ - if not issubclass(value, self.type): - raise TypeError( - "'{name}' must be a subclass of {type!r} " - "(got {value!r}).".format( - name=attr.name, - type=self.type, - value=value, - ), - attr, - self.type, - value, - ) - - def __repr__(self): - return "".format( - type=self.type - ) - - -def _subclass_of(type): - """ - A validator that raises a `TypeError` if the initializer is called - with a wrong type for this particular attribute (checks are performed using - `issubclass` therefore it's also valid to pass a tuple of types). - - :param type: The type to check for. - :type type: type or tuple of types - - :raises TypeError: With a human readable error message, the attribute - (of type `attrs.Attribute`), the expected type, and the value it - got. 
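A second small sketch, again invented for illustration rather than taken from the file, combining the bound and length validators defined above (`ge`, `le`, `max_len`); passing a list of validators to `attr.ib` composes them as with `and_()`:

```python
import attr
from attr.validators import ge, le, max_len, instance_of

@attr.s
class User:
    name = attr.ib(validator=[instance_of(str), max_len(32)])
    age = attr.ib(validator=[ge(0), le(120)])

User(name="ada", age=36)    # ok
User(name="ada", age=-1)    # ge(0) raises ValueError: "'age' must be >= 0: -1"
```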
- """ - return _SubclassOfValidator(type) - - -@attrs(repr=False, slots=True, hash=True) -class _NotValidator: - validator = attrib() - msg = attrib( - converter=default_if_none( - "not_ validator child '{validator!r}' " - "did not raise a captured error" - ) - ) - exc_types = attrib( - validator=deep_iterable( - member_validator=_subclass_of(Exception), - iterable_validator=instance_of(tuple), - ), - ) - - def __call__(self, inst, attr, value): - try: - self.validator(inst, attr, value) - except self.exc_types: - pass # suppress error to invert validity - else: - raise ValueError( - self.msg.format( - validator=self.validator, - exc_types=self.exc_types, - ), - attr, - self.validator, - value, - self.exc_types, - ) - - def __repr__(self): - return ( - "" - ).format( - what=self.validator, - exc_types=self.exc_types, - ) - - -def not_(validator, *, msg=None, exc_types=(ValueError, TypeError)): - """ - A validator that wraps and logically 'inverts' the validator passed to it. - It will raise a `ValueError` if the provided validator *doesn't* raise a - `ValueError` or `TypeError` (by default), and will suppress the exception - if the provided validator *does*. - - Intended to be used with existing validators to compose logic without - needing to create inverted variants, for example, ``not_(in_(...))``. - - :param validator: A validator to be logically inverted. - :param msg: Message to raise if validator fails. - Formatted with keys ``exc_types`` and ``validator``. - :type msg: str - :param exc_types: Exception type(s) to capture. - Other types raised by child validators will not be intercepted and - pass through. - - :raises ValueError: With a human readable error message, - the attribute (of type `attrs.Attribute`), - the validator that failed to raise an exception, - the value it got, - and the expected exception types. - - .. versionadded:: 22.2.0 - """ - try: - exc_types = tuple(exc_types) - except TypeError: - exc_types = (exc_types,) - return _NotValidator(validator, msg, exc_types) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/local.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/local.py deleted file mode 100644 index 01a50ee42504b401c6c470ae00ee023f11c5308d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/local.py +++ /dev/null @@ -1,423 +0,0 @@ -import datetime -import io -import logging -import os -import os.path as osp -import posixpath -import re -import shutil -import stat -import tempfile - -from fsspec import AbstractFileSystem -from fsspec.compression import compr -from fsspec.core import get_compression -from fsspec.utils import isfilelike, stringify_path - -logger = logging.getLogger("fsspec.local") - - -class LocalFileSystem(AbstractFileSystem): - """Interface to files on local storage - - Parameters - ---------- - auto_mkdir: bool - Whether, when opening a file, the directory containing it should - be created (if it doesn't already exist). This is assumed by pyarrow - code. 
- """ - - root_marker = "/" - protocol = "file" - local_file = True - - def __init__(self, auto_mkdir=False, **kwargs): - super().__init__(**kwargs) - self.auto_mkdir = auto_mkdir - - @property - def fsid(self): - return "local" - - def mkdir(self, path, create_parents=True, **kwargs): - path = self._strip_protocol(path) - if self.exists(path): - raise FileExistsError(path) - if create_parents: - self.makedirs(path, exist_ok=True) - else: - os.mkdir(path, **kwargs) - - def makedirs(self, path, exist_ok=False): - path = self._strip_protocol(path) - os.makedirs(path, exist_ok=exist_ok) - - def rmdir(self, path): - path = self._strip_protocol(path) - os.rmdir(path) - - def ls(self, path, detail=False, **kwargs): - path = self._strip_protocol(path) - if detail: - with os.scandir(path) as it: - return [self.info(f) for f in it] - else: - return [posixpath.join(path, f) for f in os.listdir(path)] - - def glob(self, path, **kwargs): - path = self._strip_protocol(path) - return super().glob(path, **kwargs) - - def info(self, path, **kwargs): - if isinstance(path, os.DirEntry): - # scandir DirEntry - out = path.stat(follow_symlinks=False) - link = path.is_symlink() - if path.is_dir(follow_symlinks=False): - t = "directory" - elif path.is_file(follow_symlinks=False): - t = "file" - else: - t = "other" - path = self._strip_protocol(path.path) - else: - # str or path-like - path = self._strip_protocol(path) - out = os.stat(path, follow_symlinks=False) - link = stat.S_ISLNK(out.st_mode) - if link: - out = os.stat(path, follow_symlinks=True) - if stat.S_ISDIR(out.st_mode): - t = "directory" - elif stat.S_ISREG(out.st_mode): - t = "file" - else: - t = "other" - result = { - "name": path, - "size": out.st_size, - "type": t, - "created": out.st_ctime, - "islink": link, - } - for field in ["mode", "uid", "gid", "mtime", "ino", "nlink"]: - result[field] = getattr(out, "st_" + field) - if result["islink"]: - result["destination"] = os.readlink(path) - try: - out2 = os.stat(path, follow_symlinks=True) - result["size"] = out2.st_size - except IOError: - result["size"] = 0 - return result - - def lexists(self, path, **kwargs): - return osp.lexists(path) - - def cp_file(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1).rstrip("/") - path2 = self._strip_protocol(path2).rstrip("/") - if self.auto_mkdir: - self.makedirs(self._parent(path2), exist_ok=True) - if self.isfile(path1): - shutil.copyfile(path1, path2) - elif self.isdir(path1): - self.mkdirs(path2, exist_ok=True) - else: - raise FileNotFoundError(path1) - - def get_file(self, path1, path2, callback=None, **kwargs): - if isfilelike(path2): - with open(path1, "rb") as f: - shutil.copyfileobj(f, path2) - else: - return self.cp_file(path1, path2, **kwargs) - - def put_file(self, path1, path2, callback=None, **kwargs): - return self.cp_file(path1, path2, **kwargs) - - def mv_file(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1).rstrip("/") - path2 = self._strip_protocol(path2).rstrip("/") - shutil.move(path1, path2) - - def link(self, src, dst, **kwargs): - src = self._strip_protocol(src) - dst = self._strip_protocol(dst) - os.link(src, dst, **kwargs) - - def symlink(self, src, dst, **kwargs): - src = self._strip_protocol(src) - dst = self._strip_protocol(dst) - os.symlink(src, dst, **kwargs) - - def islink(self, path) -> bool: - return os.path.islink(self._strip_protocol(path)) - - def rm_file(self, path): - os.remove(path) - - def rm(self, path, recursive=False, maxdepth=None): - if not isinstance(path, list): - path = 
[path] - - for p in path: - p = self._strip_protocol(p).rstrip("/") - if recursive and self.isdir(p): - - if osp.abspath(p) == os.getcwd(): - raise ValueError("Cannot delete current working directory") - shutil.rmtree(p) - else: - os.remove(p) - - def unstrip_protocol(self, name): - name = self._strip_protocol(name) # normalise for local/win/... - return f"file://{name}" - - def _open(self, path, mode="rb", block_size=None, **kwargs): - path = self._strip_protocol(path) - if self.auto_mkdir and "w" in mode: - self.makedirs(self._parent(path), exist_ok=True) - return LocalFileOpener(path, mode, fs=self, **kwargs) - - def touch(self, path, truncate=True, **kwargs): - path = self._strip_protocol(path) - if self.auto_mkdir: - self.makedirs(self._parent(path), exist_ok=True) - if self.exists(path): - os.utime(path, None) - else: - open(path, "a").close() - if truncate: - os.truncate(path, 0) - - def created(self, path): - info = self.info(path=path) - return datetime.datetime.utcfromtimestamp(info["created"]) - - def modified(self, path): - info = self.info(path=path) - return datetime.datetime.utcfromtimestamp(info["mtime"]) - - @classmethod - def _parent(cls, path): - path = cls._strip_protocol(path).rstrip("/") - if "/" in path: - return path.rsplit("/", 1)[0] - else: - return cls.root_marker - - @classmethod - def _strip_protocol(cls, path): - path = stringify_path(path) - if path.startswith("file://"): - path = path[7:] - elif path.startswith("file:"): - path = path[5:] - return make_path_posix(path).rstrip("/") or cls.root_marker - - def _isfilestore(self): - # Inheriting from DaskFileSystem makes this False (S3, etc. were) - # the original motivation. But we are a posix-like file system. - # See https://github.com/dask/dask/issues/5526 - return True - - def chmod(self, path, mode): - path = stringify_path(path) - return os.chmod(path, mode) - - -def make_path_posix(path, sep=os.sep): - """Make path generic""" - if isinstance(path, (list, set, tuple)): - return type(path)(make_path_posix(p) for p in path) - if "~" in path: - path = osp.expanduser(path) - if sep == "/": - # most common fast case for posix - if path.startswith("/"): - return path - if path.startswith("./"): - path = path[2:] - return os.getcwd() + "/" + path - if ( - (sep not in path and "/" not in path) - or (sep == "/" and not path.startswith("/")) - or (sep == "\\" and ":" not in path and not path.startswith("\\\\")) - ): - # relative path like "path" or "rel\\path" (win) or rel/path" - if os.sep == "\\": - # abspath made some more '\\' separators - return make_path_posix(osp.abspath(path)) - else: - return os.getcwd() + "/" + path - if path.startswith("file://"): - path = path[7:] - if re.match("/[A-Za-z]:", path): - # for windows file URI like "file:///C:/folder/file" - # or "file:///C:\\dir\\file" - path = path[1:].replace("\\", "/").replace("//", "/") - if path.startswith("\\\\"): - # special case for windows UNC/DFS-style paths, do nothing, - # just flip the slashes around (case below does not work!) - return path.replace("\\", "/") - if re.match("[A-Za-z]:", path): - # windows full path like "C:\\local\\path" - return path.lstrip("\\").replace("\\", "/").replace("//", "/") - if path.startswith("\\"): - # windows network path like "\\server\\path" - return "/" + path.lstrip("\\").replace("\\", "/").replace("//", "/") - return path - - -def trailing_sep(path): - """Return True if the path ends with a path separator. 
- - A forward slash is always considered a path separator, even on Operating - Systems that normally use a backslash. - """ - # TODO: if all incoming paths were posix-compliant then separator would - # always be a forward slash, simplifying this function. - # See https://github.com/fsspec/filesystem_spec/pull/1250 - return path.endswith(os.sep) or (os.altsep is not None and path.endswith(os.altsep)) - - -def trailing_sep_maybe_asterisk(path): - """Return True if the path ends with a path separator and optionally an - asterisk. - - A forward slash is always considered a path separator, even on Operating - Systems that normally use a backslash. - """ - # TODO: if all incoming paths were posix-compliant then separator would - # always be a forward slash, simplifying this function. - # See https://github.com/fsspec/filesystem_spec/pull/1250 - return path.endswith((os.sep, os.sep + "*")) or ( - os.altsep is not None and path.endswith((os.altsep, os.altsep + "*")) - ) - - -class LocalFileOpener(io.IOBase): - def __init__( - self, path, mode, autocommit=True, fs=None, compression=None, **kwargs - ): - logger.debug("open file: %s", path) - self.path = path - self.mode = mode - self.fs = fs - self.f = None - self.autocommit = autocommit - self.compression = get_compression(path, compression) - self.blocksize = io.DEFAULT_BUFFER_SIZE - self._open() - - def _open(self): - if self.f is None or self.f.closed: - if self.autocommit or "w" not in self.mode: - self.f = open(self.path, mode=self.mode) - if self.compression: - compress = compr[self.compression] - self.f = compress(self.f, mode=self.mode) - else: - # TODO: check if path is writable? - i, name = tempfile.mkstemp() - os.close(i) # we want normal open and normal buffered file - self.temp = name - self.f = open(name, mode=self.mode) - if "w" not in self.mode: - self.size = self.f.seek(0, 2) - self.f.seek(0) - self.f.size = self.size - - def _fetch_range(self, start, end): - # probably only used by cached FS - if "r" not in self.mode: - raise ValueError - self._open() - self.f.seek(start) - return self.f.read(end - start) - - def __setstate__(self, state): - self.f = None - loc = state.pop("loc", None) - self.__dict__.update(state) - if "r" in state["mode"]: - self.f = None - self._open() - self.f.seek(loc) - - def __getstate__(self): - d = self.__dict__.copy() - d.pop("f") - if "r" in self.mode: - d["loc"] = self.f.tell() - else: - if not self.f.closed: - raise ValueError("Cannot serialise open write-mode local file") - return d - - def commit(self): - if self.autocommit: - raise RuntimeError("Can only commit if not already set to autocommit") - shutil.move(self.temp, self.path) - - def discard(self): - if self.autocommit: - raise RuntimeError("Cannot discard if set to autocommit") - os.remove(self.temp) - - def readable(self) -> bool: - return True - - def writable(self) -> bool: - return "r" not in self.mode - - def read(self, *args, **kwargs): - return self.f.read(*args, **kwargs) - - def write(self, *args, **kwargs): - return self.f.write(*args, **kwargs) - - def tell(self, *args, **kwargs): - return self.f.tell(*args, **kwargs) - - def seek(self, *args, **kwargs): - return self.f.seek(*args, **kwargs) - - def seekable(self, *args, **kwargs): - return self.f.seekable(*args, **kwargs) - - def readline(self, *args, **kwargs): - return self.f.readline(*args, **kwargs) - - def readlines(self, *args, **kwargs): - return self.f.readlines(*args, **kwargs) - - def close(self): - return self.f.close() - - @property - def closed(self): - return 
self.f.closed - - def fileno(self): - return self.raw.fileno() - - def flush(self) -> None: - self.f.flush() - - def __iter__(self): - return self.f.__iter__() - - def __getattr__(self, item): - return getattr(self.f, item) - - def __enter__(self): - self._incontext = True - return self - - def __exit__(self, exc_type, exc_value, traceback): - self._incontext = False - self.f.__exit__(exc_type, exc_value, traceback) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css deleted file mode 100644 index 1b457869043e5e2005c2331cb14abed07b7f6a88..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css +++ /dev/null @@ -1 +0,0 @@ -span.svelte-s1r2yt{font-weight:var(--section-header-text-weight);font-size:var(--section-header-text-size)}.label-wrap.svelte-s1r2yt{display:flex;justify-content:space-between;cursor:pointer;width:var(--size-full)}.label-wrap.open.svelte-s1r2yt{margin-bottom:var(--size-2)}.icon.svelte-s1r2yt{transition:.15s} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/commands/_cli_utils.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/commands/_cli_utils.py deleted file mode 100644 index bbf17e887e901e58461b09e6648d614bb2caabbb..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/commands/_cli_utils.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Contains a utility for good-looking prints.""" -import os -from typing import List, Union - - -class ANSI: - """ - Helper for en.wikipedia.org/wiki/ANSI_escape_code - """ - - _bold = "\u001b[1m" - _gray = "\u001b[90m" - _red = "\u001b[31m" - _reset = "\u001b[0m" - - @classmethod - def bold(cls, s: str) -> str: - return cls._format(s, cls._bold) - - @classmethod - def gray(cls, s: str) -> str: - return cls._format(s, cls._gray) - - @classmethod - def red(cls, s: str) -> str: - return cls._format(s, cls._bold + cls._red) - - @classmethod - def _format(cls, s: str, code: str) -> str: - if os.environ.get("NO_COLOR"): - # See https://no-color.org/ - return s - return f"{code}{s}{cls._reset}" - - -def tabulate(rows: List[List[Union[str, int]]], headers: List[str]) -> str: - """ - Inspired by: - - - stackoverflow.com/a/8356620/593036 - - stackoverflow.com/questions/9535954/printing-lists-as-tabular-data - """ - col_widths = [max(len(str(x)) for x in col) for col in zip(*rows, headers)] - row_format = ("{{:{}}} " * len(headers)).format(*col_widths) - lines = [] - lines.append(row_format.format(*headers)) - lines.append(row_format.format(*["-" * w for w in col_widths])) - for row in rows: - lines.append(row_format.format(*row)) - return "\n".join(lines) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/docstring.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/docstring.py deleted file mode 100644 index b6ddcf5acd10affa8dd8dff4b52f2ef6bb596d58..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/docstring.py +++ /dev/null @@ -1,4 +0,0 @@ -from matplotlib._docstring import * # noqa: F401, F403 -from matplotlib import _api -_api.warn_deprecated( - "3.6", obj_type='module', name=f"{__name__}") diff --git a/spaces/lafi23333/aikomori/data_utils.py b/spaces/lafi23333/aikomori/data_utils.py deleted file mode 100644 index 9dfba4a9dfbfbd2b6ed5e771a5ffee4f70419ba3..0000000000000000000000000000000000000000 --- a/spaces/lafi23333/aikomori/data_utils.py +++ /dev/null @@ -1,152 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text, transform - -# import h5py - - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths, hparams): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.spec_len = hparams.train.max_speclen - self.spk_map = hparams.spk - - random.seed(1234) - random.shuffle(self.audiopaths) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split(os.sep)[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - c = torch.load(filename + ".soft.pt").squeeze(0) - c = torch.repeat_interleave(c, repeats=2, dim=1) - - f0 = np.load(filename + ".f0.npy") - f0 = torch.FloatTensor(f0) - lmin = min(c.size(-1), spec.size(-1), f0.shape[0]) - assert abs(c.size(-1) - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape, filename) - assert abs(lmin - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - assert abs(lmin - c.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - spec, c, f0 = spec[:, :lmin], c[:, :lmin], f0[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - _spec, _c, _audio_norm, _f0 = spec, c, audio_norm, f0 - while spec.size(-1) < self.spec_len: - spec = torch.cat((spec, _spec), -1) - c = torch.cat((c, _c), -1) - f0 = torch.cat((f0, _f0), -1) - audio_norm = torch.cat((audio_norm, _audio_norm), -1) - start = random.randint(0, spec.size(-1) - self.spec_len) - end = start + self.spec_len - spec = spec[:, start:end] - c = c[:, start:end] - f0 = f0[start:end] - audio_norm = audio_norm[:, start * self.hop_length:end * self.hop_length] - - return c, f0, spec, audio_norm, spk - - def __getitem__(self, index): - return self.get_audio(self.audiopaths[index][0]) - - def __len__(self): - return len(self.audiopaths) - - -class EvalDataLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths, hparams): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.audiopaths = self.audiopaths[:5] - self.spk_map = hparams.spk - - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split(os.sep)[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - c = torch.load(filename + ".soft.pt").squeeze(0) - - c = torch.repeat_interleave(c, repeats=2, dim=1) - - f0 = np.load(filename + ".f0.npy") - f0 = torch.FloatTensor(f0) - lmin = min(c.size(-1), spec.size(-1), f0.shape[0]) - assert abs(c.size(-1) - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - assert abs(f0.shape[0] - spec.shape[-1]) < 4, (c.size(-1), spec.size(-1), f0.shape) - spec, c, f0 = spec[:, :lmin], c[:, :lmin], f0[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - - return c, f0, spec, audio_norm, spk - - def __getitem__(self, index): - return self.get_audio(self.audiopaths[index][0]) - - def __len__(self): - return len(self.audiopaths) - diff --git a/spaces/lewisliuX123/wechatllama2/scripts/start.sh b/spaces/lewisliuX123/wechatllama2/scripts/start.sh deleted file mode 100644 index ac92f8851f6925399f2a4482e271a10ff2accbd5..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatllama2/scripts/start.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash -#后台运行Chat_on_webchat执行脚本 - -cd `dirname $0`/.. -export BASE_DIR=`pwd` -echo $BASE_DIR - -# check the nohup.out log output file -if [ ! 
-f "${BASE_DIR}/nohup.out" ]; then - touch "${BASE_DIR}/nohup.out" -echo "create file ${BASE_DIR}/nohup.out" -fi - -nohup python3 "${BASE_DIR}/app.py" & tail -f "${BASE_DIR}/nohup.out" - -echo "Chat_on_webchat is starting,you can check the ${BASE_DIR}/nohup.out" diff --git a/spaces/librarian-bots/SFconvertbot-PR-dashboard/app.py b/spaces/librarian-bots/SFconvertbot-PR-dashboard/app.py deleted file mode 100644 index 0732609ebcc14404387f89f168d22067dee37d7d..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/SFconvertbot-PR-dashboard/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -from datetime import datetime, timedelta -from functools import lru_cache -from typing import Any, List - -import gradio as gr -import httpx -import pandas as pd -import plotly.express as px -import polars as pl -from cachetools import TTLCache, cached -from datasets import Dataset, load_dataset -from dotenv import load_dotenv -from httpx import Client -from toolz import concat, frequencies -from tqdm.auto import tqdm - -load_dotenv() -token = os.environ["HUGGINGFACE_TOKEN"] -user_agent = os.environ["USER_AGENT"] -user = os.environ["USER_TO_TRACK"] -os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" -assert token -assert user_agent -assert user - -headers = {"user-agent": user_agent, "authorization": f"Bearer {token}"} -limits = httpx.Limits(max_keepalive_connections=10, max_connections=20) -client = Client(headers=headers, http2=True, limits=limits, timeout=60.0) - - -@lru_cache(maxsize=None) -def get_hub_community_activity(user: str, max: int = 50_000) -> List[Any]: - with tqdm() as pbar: - all_data = [] - i = 1 - while i <= max: - try: - r = client.get( - f"https://huggingface.co/api/recent-activity?limit=100&type=discussion&skip={i}&user={user}", - ) - activity = r.json()["recentActivity"] - if not activity: - break - all_data.append(activity) - if len(all_data) % 1000 == 0: - # print(f"Length of all_data: {len(all_data)}") - pbar.write(f"Length of all_data: {len(all_data)}") - i += 100 - pbar.update(100) - except Exception as e: - print(e) - continue - - return list(concat(all_data)) - - -# def get_hub_community_activity(user: str) -> List[Any]: -# all_data = [] -# for i in range(1, 2000, 100): -# r = httpx.get( -# f"https://huggingface.co/api/recent-activity?limit=100&type=discussion&skip={i}&user={user}" -# ) -# activity = r.json()["recentActivity"] -# all_data.append(activity) -# return list(concat(all_data)) - - -def parse_date_time(date_time: str) -> datetime: - return datetime.strptime(date_time, "%Y-%m-%dT%H:%M:%S.%fZ") - - -def parse_pr_data(data): - data = data["discussionData"] - createdAt = parse_date_time(data["createdAt"]) - pr_number = data["num"] - status = data["status"] - repo_id = data["repo"]["name"] - repo_type = data["repo"]["type"] - isPullRequest = data["isPullRequest"] - return { - "createdAt": createdAt, - "pr_number": pr_number, - "status": status, - "repo_id": repo_id, - "type": repo_type, - "isPullRequest": isPullRequest, - } - - -@cached(cache=TTLCache(maxsize=1000, ttl=timedelta(minutes=30), timer=datetime.now)) -def update_data(): - try: - previous_df = pl.DataFrame( - load_dataset(f"librarian-bot/{user}-stats", split="train").data.table - ) - except FileNotFoundError: - previous_df = pl.DataFrame() - data = get_hub_community_activity(user) - data = [parse_pr_data(d) for d in data] - update_df = pl.DataFrame(data) - df = pl.concat([previous_df, update_df]).unique() - if len(df) != len(previous_df): - Dataset(df.to_arrow()).push_to_hub(f"{user}-stats", token=token) 
- return df - - -# def get_pr_status(): -# df = update_data() -# df = df.filter(pl.col("isPullRequest") is True) -# return df.select(pl.col("status").value_counts()) -# # return frequencies(x["status"] for x in pr_data) - - -@lru_cache(maxsize=512) -def get_pr_status(user: str): - all_data = get_hub_community_activity(user) - pr_data = ( - x["discussionData"] for x in all_data if x["discussionData"]["isPullRequest"] - ) - return frequencies(x["status"] for x in pr_data) - - -def create_pie(): - frequencies = get_pr_status(user) - df = pd.DataFrame({"status": frequencies.keys(), "number": frequencies.values()}) - return px.pie(df, values="number", names="status", template="seaborn") - - -def group_status_by_pr_number(): - all_data = get_hub_community_activity(user) - all_data = [parse_pr_data(d) for d in all_data] - return ( - pl.DataFrame(all_data).groupby("status").agg(pl.mean("pr_number")).to_pandas() - ) - - -def plot_over_time(): - all_data = get_hub_community_activity(user) - all_data = [parse_pr_data(d) for d in all_data] - df = pl.DataFrame(all_data).with_columns(pl.col("createdAt").cast(pl.Date)) - df = df.pivot( - values=["status"], - index=["createdAt"], - columns=["status"], - aggregate_function="count", - ) - df = df.fill_null(0) - df = df.with_columns(pl.sum(["open", "closed", "merged"])).sort("createdAt") - df = df.to_pandas().set_index("createdAt").cumsum() - return px.line(df, x=df.index, y=[c for c in df.columns if c != "sum"]) - - -create_pie() - -with gr.Blocks() as demo: - # frequencies = get_pr_status("librarian-bot") - gr.Markdown(f"# {user} PR Stats") - gr.Markdown(f"Total prs and issues opened by {user}: {len(update_data()):,}") - # gr.Markdown(f"Total PRs opened: {sum(frequencies.values())}") - with gr.Column(): - gr.Markdown("## Pull requests status") - gr.Markdown( - "The below pie chart shows the percentage of pull requests made by" - " librarian bot that are open, closed or merged" - ) - gr.Plot(create_pie()) - with gr.Column(): - gr.Markdown("Pull requests opened, closed and merged over time (cumulative)") - gr.Plot(plot_over_time()) - with gr.Column(): - gr.Markdown("## Pull requests status by PR number") - gr.DataFrame(group_status_by_pr_number()) -demo.launch(debug=True) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Anstoss 3 Originaldaten 11 12 REPACK Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Anstoss 3 Originaldaten 11 12 REPACK Download.md deleted file mode 100644 index 4071d33836bd6ea53655478598e4f93741e9d41f..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Anstoss 3 Originaldaten 11 12 REPACK Download.md +++ /dev/null @@ -1,28 +0,0 @@ -

    Anstoss 3 originaldaten 11 12 download


    Download ✶✶✶ https://bytlly.com/2uGxSz



    -
    -Wird im DL-Server nicht mehr heruntergeladen, gehort nicht mehr zum Stream. - -Neu sind die Originaldaten 11 und 12. - -Bis jetzt lief der Stream korrekt, aber der DL-Server unterbietet sich jetzt - wir haben 3 Originaldaten und nur eine DL-Server Daten. - -Im DL-Client wird nur noch ein Datensatz angezeigt, der mit 2 Originaldaten verknüpft ist. - -Was passiert jetzt? Der DL-Server und der DL-Client verweigern sich beide. - -A: - -Wenn ein Endpunkt wie ein DL-Client oder DL-Server keine Stream-Anfragen mehr abgibt, hat das dann für das Endpunkt einen Stream zu unterbrechen? - -Was passiert dann nachher? - -Mit wenigen Ausnahmen trifft dieses Vorgehen auf den Einsatz von MediaSessions in einem DL-Client oder DL-Server zu. Wenn die Eigenschaften von MediaSession nicht auf ein gegebenes Endpunkt abgefiedert werden, sind sie nach Durchlaufen des Endpunkts unverändert. Ich habe kein passendes Beispiel dafür gefunden. - -Das DL-Client und DL-Server auf einem Endpunkt unterbrechen die Stream für dieses Endpunkt. Wer einen anderen Endpunkt wählt, muss die Bibliothek von MediaSessions neu laden. - -No More Pretty Girls - -No More Pretty Girls is the debut album by American singer-songwriter Kacey Musgraves. It was released on May 7, 2013, by Mercury Nashville. Musgraves wrote or co-wrote every song on the album, most notably "Marry Me", "Up Down", "Not Me, Not There 4fefd39f24
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Autocomkeygen[CRACKED] Full.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Autocomkeygen[CRACKED] Full.md deleted file mode 100644 index 7142e184c8f373be065e9638d836bacc0146d738..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Autocomkeygen[CRACKED] Full.md +++ /dev/null @@ -1,6 +0,0 @@ -

Autocom keygen Full


    DOWNLOAD >>>>> https://bytlly.com/2uGxQB



    - -Ontrack EasyRecovery Enterprise 11 Keygen paragon ntfs Keygen paragon ntfs Mavis Beacon Teaches Typing Deluxe Autocom keygen Full. NHRA Drag ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Borderlands 2 [PATCHED] Crack Only Fixed-3DM And SKIDROW.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Borderlands 2 [PATCHED] Crack Only Fixed-3DM And SKIDROW.md deleted file mode 100644 index 7f7d3a89a37a8b71673df83b938bfa694aea0c8a..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Borderlands 2 [PATCHED] Crack Only Fixed-3DM And SKIDROW.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Borderlands 2 Crack Only Fixed-3DM And SKIDROW


    Download Zip ✸✸✸ https://bytlly.com/2uGx7s



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Movies In 720p Kick 1080p.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Movies In 720p Kick 1080p.md deleted file mode 100644 index d8573f971f47936fc45b6e06756f03ea10768deb..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Movies In 720p Kick 1080p.md +++ /dev/null @@ -1,12 +0,0 @@ -

    Download Movies In 720p Kick 1080p


    Download File 🗸🗸🗸 https://bytlly.com/2uGwMm



    - -Download Hindi Kick English Subtitled Mp4, HD & 3gp. Download Kick full movie new bollywood hindi movie / salman khan. 02:13:11. Download Kick 2 # 2014 # Full ... download movie Kick 2 2014 in full HD 720 or 1080 for free, without SMS and without registration! -Kik 2 2014 movie download on Android tablet phone. -Download Kick 2 (2014) torrent for free without registration. -Movie information Title: Kick 2 Original title: Kick 2 Release year: 2014 Genre: action, drama, sports Director: Tarun ... -http://creagor.com/index.php?newsid=1446359890&lang=en -You can download the movie Kick (2014) for free and without registration on our website. -Enjoy your viewing! 8a78ff9644
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Malwarebytes 3.8.3.2965-12467 Crack Activation Code.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Malwarebytes 3.8.3.2965-12467 Crack Activation Code.md deleted file mode 100644 index 1c19bb2ba5d24593292453ff7b9bdf9ca8964d9c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Malwarebytes 3.8.3.2965-12467 Crack Activation Code.md +++ /dev/null @@ -1,88 +0,0 @@ -## Malwarebytes 3.8.3.2965-12467 Crack Activation Code - - - - - - - - - -**CLICK HERE ✪✪✪ [https://fienislile.blogspot.com/?download=2tyEkq](https://fienislile.blogspot.com/?download=2tyEkq)** - - - - - - - - - - - - - -# How to Download and Activate Malwarebytes 3.8.3.2965-12467 Crack for Free - - - -If you are looking for a powerful and reliable anti-malware software that can protect your PC from various threats, you may want to try Malwarebytes 3.8.3.2965-12467 Crack. This is a cracked version of Malwarebytes Premium, which is one of the most popular and trusted anti-malware applications on the market. - - - -Malwarebytes Premium can detect and remove malware that even the most well-known antivirus and anti-malware programs cannot. It has a fast and efficient scanning engine, a real-time protection module, a quarantine feature, an ignore list, a settings menu, and a number of extra utilities to help you remove malware manually. - - - -Malwarebytes Premium also supports Windows 10, 8.1, 8, 7, Vista, and XP (32-bit and 64-bit), as well as Internet Explorer 6 or newer. It requires at least 250 MB of free hard disk space, 1024 MB of RAM (2048 MB for 64-bit OS), and an 800 MHz CPU or faster with SSE2 technology. - - - -However, Malwarebytes Premium is not a free software. You need to purchase a license key to activate it and enjoy its full features. The license key costs $39.99 per year for one device, or $59.99 per year for three devices. - - - -If you don't want to spend money on Malwarebytes Premium, you can download and activate Malwarebytes 3.8.3.2965-12467 Crack for free. This is a modified version of Malwarebytes Premium that bypasses the activation process and lets you use it without any limitations. - - - -## How to Download and Activate Malwarebytes 3.8.3.2965-12467 Crack - - - -To download and activate Malwarebytes 3.8.3.2965-12467 Crack, you need to follow these steps: - - - -1. Download the setup file of Malwarebytes Premium v3.8.3.2965 from [this link](https://www.iemblog.com/?p=3585&lang=en). This is a safe and verified source that provides the original installer of Malwarebytes Premium. - -2. Run the setup file and follow the instructions to install Malwarebytes Premium on your PC. - -3. Download the crack file of Malwarebytes 3.8.3.2965-12467 from [this link](https://www.iemblog.com/?p=3585&lang=en). This is a keygen.exe file that generates a valid activation code for Malwarebytes Premium. - -4. Run the keygen.exe file and click on the Generate button to get an activation code. - -5. Copy the activation code and paste it into the activation window of Malwarebytes Premium. - -6. Click on the Activate button to complete the activation process. - -7. Enjoy using Malwarebytes 3.8.3.2965-12467 Crack for free! - - - -## Conclusion - - - -Malwarebytes 3.8.3.2965-12467 Crack is a great way to use Malwarebytes Premium for free and protect your PC from malware threats. However, you should be aware that using cracked software may pose some risks, such as viruses, malware, legal issues, or compatibility problems. 
- - - -Therefore, we recommend that you use Malwarebytes 3.8.3.2965-12467 Crack at your own risk and discretion. If you want to support the developers of Malwarebytes Premium and get regular updates and technical support, you should buy a license key from their official website. - - 145887f19f - - - - - diff --git a/spaces/lingbionlp/PhenoTagger-Demo/src/tagging_text.py b/spaces/lingbionlp/PhenoTagger-Demo/src/tagging_text.py deleted file mode 100644 index 0f1cf0338579aeeb1dd8f299539084e0e09a439f..0000000000000000000000000000000000000000 --- a/spaces/lingbionlp/PhenoTagger-Demo/src/tagging_text.py +++ /dev/null @@ -1,102 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Mon Aug 24 16:21:23 2020 - -@author: luol2 -""" - -import argparse -from src.ssplit_tokenzier import ssplit_token_pos_lemma -from src.ml_ner import ml_tagging,ml_tagging_allngram -from src.combine_result import combine_ml_dict -from src.restore_index import restore_index_nest_fn -from src.dic_ner import dic_ont -from src.post_processing import combine_overlap -from src.abbre_resolution import postprocess_abbr -import os -import time -import json - -#hybrid method -def bioTag(text,biotag_dic,ml_model,onlyLongest=False, abbrRecog=False, Threshold=0.95): - - # startTime=time.time() - ssplit_token=ssplit_token_pos_lemma(text) -# print(ssplit_token) - # print('ssplit token:',time.time()-startTime) - - # startTime=time.time() - dict_tsv=biotag_dic.matching(ssplit_token) -# print('dict tsv:\n',dict_tsv) - # print('dict ner:',time.time()-startTime) - - # startTime=time.time() - ml_tsv=ml_tagging(ssplit_token,ml_model,Threshold) - #print('ml_tsv:\n',ml_tsv) - # print('ml ner:',time.time()-startTime) - - # startTime=time.time() - combine_tsv=combine_ml_dict(dict_tsv,ml_tsv) - #combine_tsv=combine_ml_dict_fn(ml_tsv,dict_tsv) - #print('combine:\n',combine_tsv) - # print('combine:',time.time()-startTime) - - # startTime=time.time() - final_result= restore_index_nest_fn(text,combine_tsv) -# print('final ner:',time.time()-startTime) - if onlyLongest==True: - final_result=combine_overlap(final_result) - if abbrRecog==True: - final_result=postprocess_abbr(final_result,text) -# print('final result:') -# print(final_result) - # print('final ner:',time.time()-startTime) - - return final_result - -# only machine learning-based method -def bioTag_ml(text,ml_model,onlyLongest=False,abbrRecog=False, Threshold=0.95): - -# startTime=time.time() - ssplit_token=ssplit_token_pos_lemma(text) -# print(ssplit_token) -# print('ssplit token:',time.time()-startTime) - -# startTime=time.time() - ml_tsv=ml_tagging_allngram(ssplit_token,ml_model,Threshold) -# print('ml_tsv:\n',ml_tsv) -# print('ml ner:',time.time()-startTime) - - final_result= restore_index_nest_fn(text,ml_tsv) -# print('final ner:',time.time()-startTime) - if onlyLongest==True: - final_result=combine_overlap(final_result) - - if abbrRecog==True: - final_result=postprocess_abbr(final_result,text) - - return final_result - -# only dict method -def bioTag_dic(text,biotag_dic,onlyLongest=False, abbrRecog=False): - -# startTime=time.time() - ssplit_token=ssplit_token_pos_lemma(text) -# print(ssplit_token) -# print('ssplit token:',time.time()-startTime) - -# startTime=time.time() - dict_tsv=biotag_dic.matching(ssplit_token) -# print('dict tsv:\n',dict_tsv) -# print('dict ner:',time.time()-startTime) - - final_result= restore_index_nest_fn(text,dict_tsv) -# print('final ner:',time.time()-startTime) - if onlyLongest==True: - final_result=combine_overlap(final_result) - - if abbrRecog==True: - 
final_result=postprocess_abbr(final_result,text) - - return final_result - diff --git a/spaces/lixq/bingo61/next.config.js b/spaces/lixq/bingo61/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/ludusc/latent-space-theories/test_disentanglement.sh b/spaces/ludusc/latent-space-theories/test_disentanglement.sh deleted file mode 100644 index 40e20ce1cdce315a50b3eab17c51661315e1ea17..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/test_disentanglement.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -#SBATCH --time=02:00:00 -#SBATCH --mem=32GB -#SBATCH --gres gpu:1 - -module load v100 -module load cuda -module load mamba -source activate test - -python DisentanglementBase.py -conda deactivate diff --git a/spaces/ludusc/latent-space-theories/torch_utils/ops/grid_sample_gradfix.py b/spaces/ludusc/latent-space-theories/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index 017f03ac127be121f3349ee35e7714104be04a42..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. -Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import torch -from pkg_resources import parse_version - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. 
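# (Added descriptive note, not part of the original NVIDIA source.) Callers opt in by
# setting `grid_sample_gradfix.enabled = True` and routing calls through the
# grid_sample(input, grid) wrapper defined below; when enabled, the custom
# _GridSample2dForward/_GridSample2dBackward autograd functions are used so that
# higher-order gradients flow between input and output, as the module docstring describes.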
-_use_pytorch_1_11_api = parse_version(torch.__version__) >= parse_version('1.11.0a') # Allow prerelease builds of 1.11 -_use_pytorch_1_12_api = parse_version(torch.__version__) >= parse_version('1.12.0a') # Allow prerelease builds of 1.12 - -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - return enabled - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - if _use_pytorch_1_12_api: - op = op[0] - if _use_pytorch_1_11_api: - output_mask = (ctx.needs_input_grad[1], ctx.needs_input_grad[2]) - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False, output_mask) - else: - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clogf.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clogf.h deleted file mode 100644 index 7f3314ed2635c28ff5627235525da9c1fa8709ad..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/complex/clogf.h +++ /dev/null @@ -1,198 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2012 Stephen Montgomery-Smith - * All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - */ - -/* adapted from FreeBSDs msun:*/ - -#pragma once - -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -/* round down to 8 = 24/3 bits */ -__host__ __device__ inline -float trim(float x){ - uint32_t hx; - get_float_word(hx, x); - hx &= 0xffff0000; - float ret; - set_float_word(ret,hx); - return ret; -} - - -__host__ __device__ inline -complex clogf(const complex& z){ - - // Adapted from FreeBSDs msun - float x, y; - float ax, ay; - float x0, y0, x1, y1, x2, y2, t, hm1; - float val[12]; - int i, sorted; - const float e = 2.7182818284590452354f; - - x = z.real(); - y = z.imag(); - - /* Handle NaNs using the general formula to mix them right. */ - if (x != x || y != y){ - return (complex(std::log(norm(z)), std::atan2(y, x))); - } - - ax = std::abs(x); - ay = std::abs(y); - if (ax < ay) { - t = ax; - ax = ay; - ay = t; - } - - /* - * To avoid unnecessary overflow, if x and y are very large, divide x - * and y by M_E, and then add 1 to the logarithm. This depends on - * M_E being larger than sqrt(2). - * There is a potential loss of accuracy caused by dividing by M_E, - * but this case should happen extremely rarely. - */ - // For high values of ay -> hypotf(FLT_MAX,ay) = inf - // We expect that for values at or below ay = 1e34f this should not happen - if (ay > 1e34f){ - return (complex(std::log(hypotf(x / e, y / e)) + 1.0f, std::atan2(y, x))); - } - if (ax == 1.f) { - if (ay < 1e-19f){ - return (complex((ay * 0.5f) * ay, std::atan2(y, x))); - } - return (complex(log1pf(ay * ay) * 0.5f, std::atan2(y, x))); - } - - /* - * Because atan2 and hypot conform to C99, this also covers all the - * edge cases when x or y are 0 or infinite. - */ - if (ax < 1e-6f || ay < 1e-6f || ax > 1e6f || ay > 1e6f){ - return (complex(std::log(hypotf(x, y)), std::atan2(y, x))); - } - - /* - * From this point on, we don't need to worry about underflow or - * overflow in calculating ax*ax or ay*ay. - */ - - /* Some easy cases. 
*/ - - if (ax >= 1.0f){ - return (complex(log1pf((ax-1.f)*(ax+1.f) + ay*ay) * 0.5f, atan2(y, x))); - } - - if (ax*ax + ay*ay <= 0.7f){ - return (complex(std::log(ax*ax + ay*ay) * 0.5f, std::atan2(y, x))); - } - - /* - * Take extra care so that ULP of real part is small if hypot(x,y) is - * moderately close to 1. - */ - - - x0 = trim(ax); - ax = ax-x0; - x1 = trim(ax); - x2 = ax-x1; - y0 = trim(ay); - ay = ay-y0; - y1 = trim(ay); - y2 = ay-y1; - - val[0] = x0*x0; - val[1] = y0*y0; - val[2] = 2*x0*x1; - val[3] = 2*y0*y1; - val[4] = x1*x1; - val[5] = y1*y1; - val[6] = 2*x0*x2; - val[7] = 2*y0*y2; - val[8] = 2*x1*x2; - val[9] = 2*y1*y2; - val[10] = x2*x2; - val[11] = y2*y2; - - /* Bubble sort. */ - - do { - sorted = 1; - for (i=0;i<11;i++) { - if (val[i] < val[i+1]) { - sorted = 0; - t = val[i]; - val[i] = val[i+1]; - val[i+1] = t; - } - } - } while (!sorted); - - hm1 = -1; - for (i=0;i<12;i++){ - hm1 += val[i]; - } - return (complex(0.5f * log1pf(hm1), atan2(y, x))); -} - -} // namespace complex - -} // namespace detail - -template <> -__host__ __device__ -inline complex log(const complex& z){ - return detail::complex::clogf(z); -} - -} // namespace thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/static_map.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/static_map.h deleted file mode 100644 index 872a73aefd347d65519663bdcb8105ee83f86baf..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/static_map.h +++ /dev/null @@ -1,170 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - - -#include - - -namespace thrust -{ -namespace detail -{ -namespace static_map_detail -{ - - -template -struct key_value -{ - static const unsigned int key = k; - static const unsigned int value = v; -}; - - -template -struct cons -{ - template - struct static_get - { - static const unsigned int value = (key == Head::key) ? (Head::value) : Tail::template static_get::value; - }; - - - template - __host__ __device__ - static unsigned int get(unsigned int key) - { - return (key == Head::key) ? (Head::value) : Tail::template get(key); - } -}; - - -template -struct cons -{ - template - struct static_get - { - static const unsigned int value = (key == Head::key) ? (Head::value) : default_value; - }; - - template - __host__ __device__ - static unsigned int get(unsigned int key) - { - return (key == Head::key) ? 
(Head::value) : default_value; - } -}; - - -template -struct static_map -{ - typedef cons< - key_value, - cons< - key_value, - cons< - key_value, - cons< - key_value, - cons< - key_value, - cons< - key_value, - cons< - key_value, - cons< - key_value - > - > - > - > - > - > - > - > impl; - - template - struct static_get - { - static const unsigned int value = impl::template static_get::value; - }; - - __host__ __device__ - static unsigned int get(unsigned int key) - { - return impl::template get(key); - } -}; - - -} // end namespace static_map_detail - - -template -struct static_map - : static_map_detail::static_map< - default_value, - key0, value0, - key1, value1, - key2, value2, - key3, value3, - key4, value4, - key5, value5, - key6, value6, - key7, value7 - > -{}; - - -template -struct static_lookup -{ - static const unsigned int value = StaticMap::template static_get::value; -}; - - -template -__host__ __device__ -unsigned int lookup(unsigned int key) -{ - return StaticMap::get(key); -} - - -} // end namespace detail -} // end namespace thrust - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/metrics/README_CN.md b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/metrics/README_CN.md deleted file mode 100644 index 98d00308ab79e92a2393f9759190de8122a8e79d..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/metrics/README_CN.md +++ /dev/null @@ -1,48 +0,0 @@ -# Metrics - -[English](README.md) **|** [简体中文](README_CN.md) - -- [约定](#约定) -- [PSNR 和 SSIM](#psnr-和-ssim) - -## 约定 - -因为不同的输入类型会导致结果的不同,因此我们对输入做如下约定: - -- Numpy 类型 (一般是 cv2 的结果) - - UINT8: BGR, [0, 255], (h, w, c) - - float: BGR, [0, 1], (h, w, c). 一般作为中间结果 -- Tensor 类型 - - float: RGB, [0, 1], (n, c, h, w) - -其他约定: - -- 以 `_pt` 结尾的是 PyTorch 结果 -- PyTorch version 支持 batch 计算 -- 颜色转换在 float32 上做;metric计算在 float64 上做 - -## PSNR 和 SSIM - -PSNR 和 SSIM 的结果趋势是一致的,即一般 PSNR 高,则 SSIM 也高。 -在实现上, PSNR 的各种实现都很一致。SSIM 有各种各样的实现,我们这里和 MATLAB 最原始版本保持 (参考 [NTIRE17比赛](https://competitions.codalab.org/competitions/16306#participate) 的 [evaluation代码](https://competitions.codalab.org/my/datasets/download/ebe960d8-0ec8-4846-a1a2-7c4a586a7378)) - -下面列了各个实现的结果比对. 
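As a rough illustration of the conventions above (a sketch added here for clarity, not basicsr's own implementation), PSNR for two uint8 images in [0, 255] with shape (h, w, c), computed in float64, might look like this:

```python
import numpy as np

def psnr_uint8(img1: np.ndarray, img2: np.ndarray) -> float:
    """Hypothetical helper: PSNR between two uint8 images in [0, 255], shape (h, w, c)."""
    # Compute in float64, matching the convention stated above.
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For the Y-channel numbers in the tables below, the images would first be converted to Y (color conversion in float32, per the convention above) before applying the same formula.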
-总结:PyTorch 实现和 MATLAB 实现基本一致,在 GPU 运行上会有稍许差异 - -- PSNR 比对 - -|Image | Color Space | MATLAB | Numpy | PyTorch CPU | PyTorch GPU | -|:---| :---: | :---: | :---: | :---: | :---: | -|baboon| RGB | 20.419710 | 20.419710 | 20.419710 |20.419710 | -|baboon| Y | - |22.441898 | 22.441899 | 22.444916| -|comic | RGB | 20.239912 | 20.239912 | 20.239912 | 20.239912 | -|comic | Y | - | 21.720398 | 21.720398 | 21.721663| - -- SSIM 比对 - -|Image | Color Space | MATLAB | Numpy | PyTorch CPU | PyTorch GPU | -|:---| :---: | :---: | :---: | :---: | :---: | -|baboon| RGB | 0.391853 | 0.391853 | 0.391853|0.391853 | -|baboon| Y | - |0.453097| 0.453097 | 0.453171| -|comic | RGB | 0.567738 | 0.567738 | 0.567738 | 0.567738| -|comic | Y | - | 0.585511 | 0.585511 | 0.585522 | diff --git a/spaces/marusia/img_styler/README.md b/spaces/marusia/img_styler/README.md deleted file mode 100644 index ae6d6b700f6b12ee2617b627fef4eea44550f849..0000000000000000000000000000000000000000 --- a/spaces/marusia/img_styler/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Img Styler -emoji: 🦀 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mateuseap/magic-vocals/lib/infer_pack/modules.py b/spaces/mateuseap/magic-vocals/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/mateuseap/magic-vocals/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/mattritchey/QuickAddresses/app.py b/spaces/mattritchey/QuickAddresses/app.py deleted file mode 100644 index b8a32234a9e3172e75b2a9c714fe8649d02c28e0..0000000000000000000000000000000000000000 --- a/spaces/mattritchey/QuickAddresses/app.py +++ /dev/null @@ -1,165 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Fri Nov 11 07:26:42 2022 - -@author: mritchey -""" -# streamlit run "C:\Users\mritchey\.spyder-py3\Python Scripts\streamlit projects\quick address\quick_address.py" -import streamlit as st -from streamlit_folium import st_folium -import pandas as pd -import numpy as np -import folium -from joblib import Parallel, delayed - - -@st.cache -def convert_df(df): - return df.to_csv(index=0).encode('utf-8') - - -def map_results(results): - for index, row in results.iterrows(): - address, sq_ft = results.loc[index, - 'Address'], results.loc[index, 'Total Area'] - html = f"""

    - {address} -
    Square Footage: {sq_ft}""" - - iframe = folium.IFrame(html) - popup = folium.Popup(iframe, - min_width=140, - max_width=140) - - folium.Marker(location=[results.loc[index, 'Lat'], - results.loc[index, 'Lon']], - fill_color='#43d9de', - popup=popup, - radius=8).add_to(m) - return folium - - -# @st.cache -def get_housing_data(address_input): - address = address_input.replace( - ' ', '+').replace(',', '').replace('#+', '').upper() - try: - census = pd.read_json( - f"https://geocoding.geo.census.gov/geocoder/geographies/onelineaddress?address={address}&benchmark=2020&vintage=2020&format=json") - results = census.iloc[:1, 0][0] - matchedAddress_first = results[0]['matchedAddress'] - matchedAddress_last = results[-1]['matchedAddress'] - lat, lon = results[0]['coordinates']['y'], results[0]['coordinates']['x'] - # lat2, lon2 = results[-1]['coordinates']['y'], results[-1]['coordinates']['x'] - censusb = pd.DataFrame({'Description': ['Address Input', 'Census Matched Address: First', - 'Census Matched Address: Last', 'Lat', 'Lon'], - 'Values': [address_input, matchedAddress_first, matchedAddress_last, lat, lon]}) - - #Property Records - url = f'https://www.countyoffice.org/property-records-search/?q={address}' - county_office_list = pd.read_html(url) - - if county_office_list[1].shape[1] == 2: - df2 = pd.concat([county_office_list[0], county_office_list[1]]) - else: - df2 = county_office_list[0] - df2.columns = ['Description', 'Values'] - - final = censusb.append(df2) - - #Transpose - final2 = final.T - final2.columns = final2.loc['Description'] - final2 = final2.loc[['Values']].set_index('Address Input') - # final2['County Office Url']=url - except: - final2 = address_input - return final2 - - -# @st.cache(allow_output_mutation=True) -def address_quick(df, n_jobs=24): - if isinstance(df, pd.DataFrame): - df = df.drop_duplicates() - df['address_input'] = df.iloc[:, 0]+', '+df.iloc[:, 1] + \ - ', '+df.iloc[:, 2]+' '+df.iloc[:, 3].astype(str).str[:5] - df['address'] = df['address_input'].replace( - {' ': '+', ',': ''}, regex=True).str.upper() - df['address'] = df['address'].replace({'#+': ''}, regex=True) - # addresses=df['address'].values - addresses_input = df['address_input'].values - else: - addresses_input = [df] - results = Parallel(n_jobs=n_jobs, prefer="threads")( - delayed(get_housing_data)(i) for i in addresses_input) - results_df = [i for i in results if isinstance(i, pd.DataFrame)] - results_errors = [i for i in results if not isinstance(i, pd.DataFrame)] - errors = pd.DataFrame({'Error Addresses': results_errors}) - final_results = pd.concat(results_df) - final_results = final_results[final_results.columns[2:]].copy() - - return final_results, errors - - -st.set_page_config(layout="wide") -col1, col2 = st.columns((2)) - -address = st.sidebar.text_input( - "Address", "1500 MOHICAN DR, FORESTDALE, AL, 35214") -uploaded_file = st.sidebar.file_uploader("Choose a file") -uploaded_file = 'C:/Users/mritchey/addresses_sample.csv' -address_file = st.sidebar.radio('Choose', - ('Single Address', 'Addresses (Geocode: Will take a bit)')) - - -if address_file == 'Addresses (Geocode: Will take a bit)': - try: - df = pd.read_csv(uploaded_file) - cols = df.columns.to_list()[:4] - with st.spinner("Getting Data: Hang On..."): - results, errors = address_quick(df[cols]) - - except: - st.header('Make Sure File is Loaded First and then hit "Addresses"') - -else: - results, errors = address_quick(address) - -m = folium.Map(location=[39.50, -98.35], zoom_start=3) - - -with col1: - st.title('Addresses') - 
map_results(results) - st_folium(m, height=500, width=500) - -with col2: - st.title('Results') - results.index = np.arange(1, len(results) + 1) - st.dataframe(results) - csv = convert_df(results) - st.download_button( - label="Download data as CSV", - data=csv, - file_name='Results.csv', - mime='text/csv') - try: - if errors.shape[0] > 0: - - st.header('Errors') - errors.index = np.arange(1, len(errors) + 1) - st.dataframe(errors) - # st.table(errors.assign(hack='').set_index('hack')) - csv2 = convert_df(errors) - st.download_button( - label="Download Errors as CSV", - data=csv2, - file_name='Errors.csv', - mime='text/csv') - except: - pass - -st.markdown(""" """, unsafe_allow_html=True) diff --git a/spaces/merve/dataset-worldviews/index.html b/spaces/merve/dataset-worldviews/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -

    -

      Welcome to your static Space!
      You can modify this app directly by editing index.html in the Files and versions tab.
      Also don't forget to check the Spaces documentation.
    - - diff --git a/spaces/merve/fill-in-the-blank/public/uncertainty-calibration/util.js b/spaces/merve/fill-in-the-blank/public/uncertainty-calibration/util.js deleted file mode 100644 index a0ce5b12a2a642f1186cc4004e90b046a89611f8..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/uncertainty-calibration/util.js +++ /dev/null @@ -1,38 +0,0 @@ -window.initUtil = function(){ - function addAxisLabel(c, xText, yText, xOffset=40, yOffset=-40){ - c.svg.select('.x').append('g') - .translate([c.width/2, xOffset]) - .append('text.axis-label') - .text(xText) - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'}) - - c.svg.select('.y') - .append('g') - .translate([yOffset, c.height/2]) - .append('text.axis-label') - .text(yText) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'}) - } - - function ggPlotBg(c, isBlack=true){ - if (isBlack){ - c.svg.append('rect.bg-rect') - .at({width: c.width, height: c.height, fill: '#eee'}) - .lower() - } - - c.svg.selectAll('.tick').selectAll('line').remove() - c.svg.selectAll('.y .tick') - .append('path').at({d: 'M 0 0 H ' + c.width, stroke: '#fff', strokeWidth: 1}) - c.svg.selectAll('.y text').at({x: -3}) - c.svg.selectAll('.x .tick') - .append('path').at({d: 'M 0 0 V -' + c.height, stroke: '#fff', strokeWidth: 1}) - } - - - return {addAxisLabel, ggPlotBg} -} - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js b/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js deleted file mode 100644 index 8ab520922aa2b8cb8086ca86f5119fc0b46ac433..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js +++ /dev/null @@ -1,83 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -!(function(){ - function watchFile(path){ - var lastStr = '' - - console.log(path) - function check(){ - d3.text(path + '?' + Math.random(), (err, nextStr) => { - if (err){ - console.log(err) - return check() - } - - if (nextStr == lastStr) return - lastStr = nextStr - - if (path.includes('.js')){ - console.log('js', new Date()) - Function(nextStr.replace('\n', ';').replace('\n', ';'))() - } - - if (path.includes('.css')){ - console.log('css', new Date()) - - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .filter((d, i) => i == 0) - .forEach(d => d.href = path + '?' 
+ Math.random()) - } - }) - - if (python_settings.isDev) setTimeout(check, 100) - } - check() - } - - ;[ - 'list.css', - 'style.css', - '../two-sentences/init-scatter.js', - '../two-sentences/init-util.js', - '../two-sentences/init-pair.js', - 'init.js' - ].forEach(filename => { - var root = document.currentScript.src.replace('watch-files.js', '').split('?')[0] - var path = root + filename - - if (python_settings.isDev){ - watchFile(path) - } else { - if (path.includes('.js')){ - var node = document.createElement('script') - node.setAttribute('src', path) - document.body.appendChild(node) - } - - if (path.includes('.css')){ - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .filter((d, i) => i == 0) - .forEach(d => d.href = path + '?' + Math.random()) - } - } - }) -})() - - - diff --git a/spaces/merve/fill-in-the-blank/source/anonymization/style-graph-scroll.css b/spaces/merve/fill-in-the-blank/source/anonymization/style-graph-scroll.css deleted file mode 100644 index 7680e8c43222b6993d2bedfe43a682236680541e..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/anonymization/style-graph-scroll.css +++ /dev/null @@ -1,160 +0,0 @@ -/** { border: 1px solid #f00; }*/ - - -#container{ - position: relative; - width: auto; - margin-left: -25px; - /*margin-bottom: 100px;*/ -} - -#sections{ - width: 330px; - pointer-events: none; -} - -#sections > div{ - background: white; - opacity: .2; - margin-bottom: 400px; - line-height: 1.4em; - transition: opacity .2s; - pointer-events: all; -} -#sections > div:last-child{ - height: 480px; - margin-bottom: 0px; -} -#sections > div.graph-scroll-active{ - opacity: 1; -} - -#graph{ - margin-left: 40px; - width: 500px; - position: -webkit-sticky; - position: sticky; - top: 0px; - float: right; - height: 580px; -} - -.slider-outer { - display: block; - max-width: 300px; -} - -@media (max-width: 925px) { - #container{ - margin-left: 0px; - } - - #graph{ - width: 100%; - float: none; - max-width: 500px; - margin: 0px auto; - } - - #graph > div{ - position: relative; - left:12px; - } - - #sections{ - width: auto; - position: relative; - margin: 0px auto; - } - - #sections > div{ - background: rgba(255,255,255,.8); - padding: 10px; - border-top: 1px solid; - border-bottom: 1px solid; - margin-bottom: 80vh; - width: calc(100vw - 20px); - margin-left: -5px; - } - - #sections > div > *{ - max-width: 750px; - } - - #sections > div:first-child{ - opacity: 1; - margin-top: -260px; - } - - #sections > div:last-child{ - height: auto; - } - - #sections h3{ - margin-top: .5em; - } - - /* Adjust buttons for mobile. */ - - .button-container{ - text-align: center; - left:0px; - } - - /* Adjust sliders for mobile. */ - input[type="range" i] { - width: 280px; - } - .slider-label-container{ - width: 145px; - /* display: inline-block; */ - } - - .slide-container-heads-prob, .slide-container-population { - text-align: center; - } - - .slider-container { - margin-bottom: 5px; - text-align: center; - width: 300px; - /* display:inline-block; */ - } - - .slider-outer { - text-align: center; - display: flex; - max-width: 300px; - } - - .headsProb, .population { - margin-left: 15px; - } - - .slide-container-population { - margin-bottom: -10px; - } - - .pointer div { - left: 10px; - top: 37px; - } - - /* Adjust post summary test for mobile. 
*/ - .post-summary{ - margin-left: 8px; - margin-bottom: 60px; - margin-top: 40px; - } - -} - -#graph > div{ - margin: 20 35px; -} - - -#end{ - height: 15vh; -} - diff --git a/spaces/merve/measuring-fairness/public/private-and-fair/top-bot-digits.js b/spaces/merve/measuring-fairness/public/private-and-fair/top-bot-digits.js deleted file mode 100644 index bc2f85ec8cb3b5544245f159aa62ff2fbffbcbb5..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/private-and-fair/top-bot-digits.js +++ /dev/null @@ -1,66 +0,0 @@ - -!(async function(){ - await util.getFile(`cns-cache/mnist_train_raw_3.npy`) - var digitMetadata = await util.getFile('mnist_train.csv') - var {byLabel} = util.decorateDigitMetadata(digitMetadata) - - var sel = d3.select('.top-bot-digits').html('') - .at({role: 'graphics-document', 'aria-label': `The twenty-five MNIST 3 digits most and least senstive to higher and lower privacy. The digits most sensitive to higher privacy are much more poorly drawn than the onces least sensitive to higher privacy.`}) - - var digitSel = sel.append('div') - var buttonSel = sel.append('div.digit-button-container') - .appendMany('div.button', d3.range(10)) - .text(d => d) - .on('click', d => drawClass(byLabel[d])) - - drawClass(byLabel[3]) - - async function drawClass(digitClass){ - buttonSel.classed('active', d => d == digitClass.key) - await util.getFile(`cns-cache/mnist_train_raw_${digitClass.key}.npy`) - - var nRows = 5 - var nCols = 5 - - var bot = _.sortBy(digitClass, d => +d.priv_order).slice(0, nRows*nCols) - var top = _.sortBy(digitClass, d => -d.priv_order).slice(0, nRows*nCols) - - digitSel.html('').append('div') - .st({maxWidth: 640, margin: '0 auto'}) - .appendMany('div', [bot, top]) - .st({display: 'inline-block'}) - .each(drawDigitBlock) - - - function drawDigitBlock(digits, isBot){ - var s = 2 - - var sel = d3.select(this).append('div') - - var c = d3.conventions({ - sel, - width: s*29*nCols, - height: s*29*nRows, - layers: 'cs', - margin: {top: 30, bottom: 10, right: 10, left: 10} - }) - - var ctx = c.layers[0] - - digits.forEach((d, i) => { - util.drawDigit( - ctx, - +d.i, - s, - (i % nCols)*s*29, - Math.floor(i/nCols)*s*29 - ) - }) - - c.svg.append('text') - .text(isBot ? 'Least sensitive to higher privacy' : 'Most sensitive to higher privacy') - .at({dy: '-.4em', textAnchor: 'middle', x: c.width/2, fontWeight: 600, fontSize: 14}) - } - } - -})() \ No newline at end of file diff --git a/spaces/miku-hutao/vits-uma-genshin-honkai/models.py b/spaces/miku-hutao/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/miku-hutao/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/ambient.d.ts b/spaces/mithril-security/blind_chat/.svelte-kit/ambient.d.ts deleted file mode 100644 index 97e1793c771841b0a75647ccc7150f42feb43a2d..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/ambient.d.ts +++ /dev/null @@ -1,318 +0,0 @@ - -// this file is generated — do not edit it - - -/// - -/** - * Environment variables [loaded by Vite](https://vitejs.dev/guide/env-and-mode.html#env-files) from `.env` files and `process.env`. 
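As a clarifying aside on the `SynthesizerTrn.forward` code above (this sketch is not part of any of the deleted files): the four `neg_cent` terms are a factored form of the log-density of `z_p` under a diagonal Gaussian with mean `m_p` and standard deviation `exp(logs_p)`, summed over channels and evaluated for every (target frame, text token) pair without materialising the grid explicitly. A minimal, self-contained check of that equivalence, assuming only PyTorch:

```python
import math
import torch

b, d, t_t, t_s = 2, 4, 7, 5                # batch, channels, target frames, text tokens
z_p = torch.randn(b, d, t_t)               # flow-mapped latents
m_p = torch.randn(b, d, t_s)               # text-side prior means
logs_p = 0.1 * torch.randn(b, d, t_s)      # text-side prior log-stddevs

# Vectorised form, as in the deleted model code.
s_p_sq_r = torch.exp(-2 * logs_p)          # 1 / sigma^2
neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True)
neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r)
neg_cent3 = torch.matmul(z_p.transpose(1, 2), m_p * s_p_sq_r)
neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True)
neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4    # [b, t_t, t_s]

# Brute-force reference: the same Gaussian log-density computed pair by pair.
ref = torch.empty(b, t_t, t_s)
for i in range(t_t):
    for j in range(t_s):
        diff = z_p[:, :, i] - m_p[:, :, j]
        ref[:, i, j] = torch.sum(
            -0.5 * math.log(2 * math.pi) - logs_p[:, :, j]
            - 0.5 * diff ** 2 * s_p_sq_r[:, :, j], dim=1)

print(torch.allclose(neg_cent, ref, atol=1e-5))   # True
```

The subsequent `maximum_path` call then simply selects the monotonic text-to-frame alignment that maximises this score.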
Like [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), this module cannot be imported into client-side code. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://kit.svelte.dev/docs/configuration#env) (if configured). - * - * _Unlike_ [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), the values exported from this module are statically injected into your bundle at build time, enabling optimisations like dead code elimination. - * - * ```ts - * import { API_KEY } from '$env/static/private'; - * ``` - * - * Note that all environment variables referenced in your code should be declared (for example in an `.env` file), even if they don't have a value until the app is deployed: - * - * ``` - * MY_FEATURE_FLAG="" - * ``` - * - * You can override `.env` values from the command line like so: - * - * ```bash - * MY_FEATURE_FLAG="enabled" npm run dev - * ``` - */ -declare module '$env/static/private' { - export const MONGODB_URL: string; - export const MONGODB_DB_NAME: string; - export const MONGODB_DIRECT_CONNECTION: string; - export const COOKIE_NAME: string; - export const HF_ACCESS_TOKEN: string; - export const HF_API_ROOT: string; - export const SERPER_API_KEY: string; - export const SERPAPI_KEY: string; - export const OPENID_CLIENT_ID: string; - export const OPENID_CLIENT_SECRET: string; - export const OPENID_SCOPES: string; - export const OPENID_PROVIDER_URL: string; - export const USE_CLIENT_CERTIFICATE: string; - export const CERT_PATH: string; - export const KEY_PATH: string; - export const CA_PATH: string; - export const CLIENT_KEY_PASSWORD: string; - export const REJECT_UNAUTHORIZED: string; - export const MODELS: string; - export const OLD_MODELS: string; - export const PARQUET_EXPORT_DATASET: string; - export const PARQUET_EXPORT_HF_TOKEN: string; - export const PARQUET_EXPORT_SECRET: string; - export const RATE_LIMIT: string; - export const MESSAGES_BEFORE_LOGIN: string; - export const ACSetupSvcPort: string; - export const ACSvcPort: string; - export const ALLUSERSPROFILE: string; - export const APPDATA: string; - export const CHROME_CRASHPAD_PIPE_NAME: string; - export const COLOR: string; - export const COLORTERM: string; - export const CommonProgramFiles: string; - export const CommonProgramW6432: string; - export const COMPUTERNAME: string; - export const ComSpec: string; - export const DriverData: string; - export const EDITOR: string; - export const EFC_38340: string; - export const EnableLog: string; - export const GIT_ASKPASS: string; - export const HOME: string; - export const HOMEDRIVE: string; - export const HOMEPATH: string; - export const INIT_CWD: string; - export const LANG: string; - export const LOCALAPPDATA: string; - export const LOGONSERVER: string; - export const NODE: string; - export const NODE_ENV: string; - export const NODE_EXE: string; - export const NPM_CLI_JS: string; - export const npm_command: string; - export const npm_config_cache: string; - export const npm_config_engine_strict: string; - export const npm_config_globalconfig: string; - export const npm_config_global_prefix: string; - export const npm_config_init_module: string; - export const npm_config_local_prefix: string; - export const npm_config_metrics_registry: string; - export const npm_config_node_gyp: string; - export const npm_config_noproxy: string; - export const 
npm_config_prefix: string; - export const npm_config_userconfig: string; - export const npm_config_user_agent: string; - export const npm_execpath: string; - export const npm_lifecycle_event: string; - export const npm_lifecycle_script: string; - export const npm_node_execpath: string; - export const npm_package_json: string; - export const npm_package_name: string; - export const npm_package_version: string; - export const NPM_PREFIX_NPM_CLI_JS: string; - export const NUMBER_OF_PROCESSORS: string; - export const OculusBase: string; - export const OneDrive: string; - export const OneDriveConsumer: string; - export const ORIGINAL_XDG_CURRENT_DESKTOP: string; - export const OS: string; - export const Path: string; - export const PATHEXT: string; - export const PROCESSOR_ARCHITECTURE: string; - export const PROCESSOR_IDENTIFIER: string; - export const PROCESSOR_LEVEL: string; - export const PROCESSOR_REVISION: string; - export const ProgramData: string; - export const ProgramFiles: string; - export const ProgramW6432: string; - export const PROMPT: string; - export const PSModulePath: string; - export const PUBLIC: string; - export const RlsSvcPort: string; - export const SESSIONNAME: string; - export const SystemDrive: string; - export const SystemRoot: string; - export const TEMP: string; - export const TERM_PROGRAM: string; - export const TERM_PROGRAM_VERSION: string; - export const TMP: string; - export const USERDOMAIN: string; - export const USERDOMAIN_ROAMINGPROFILE: string; - export const USERNAME: string; - export const USERPROFILE: string; - export const VSCODE_GIT_ASKPASS_EXTRA_ARGS: string; - export const VSCODE_GIT_ASKPASS_MAIN: string; - export const VSCODE_GIT_ASKPASS_NODE: string; - export const VSCODE_GIT_IPC_HANDLE: string; - export const VSCODE_INJECTION: string; - export const VSCODE_NONCE: string; - export const windir: string; -} - -/** - * Similar to [`$env/static/private`](https://kit.svelte.dev/docs/modules#$env-static-private), except that it only includes environment variables that begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) (which defaults to `PUBLIC_`), and can therefore safely be exposed to client-side code. - * - * Values are replaced statically at build time. - * - * ```ts - * import { PUBLIC_BASE_URL } from '$env/static/public'; - * ``` - */ -declare module '$env/static/public' { - export const PUBLIC_ORIGIN: string; - export const PUBLIC_SHARE_PREFIX: string; - export const PUBLIC_GOOGLE_ANALYTICS_ID: string; - export const PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID: string; - export const PUBLIC_ANNOUNCEMENT_BANNERS: string; - export const PUBLIC_APP_NAME: string; - export const PUBLIC_APP_ASSETS: string; - export const PUBLIC_APP_COLOR: string; - export const PUBLIC_APP_DATA_SHARING: string; - export const PUBLIC_APP_DISCLAIMER: string; - export const PUBLIC_VERSION: string; -} - -/** - * This module provides access to runtime environment variables, as defined by the platform you're running on. For example if you're using [`adapter-node`](https://github.com/sveltejs/kit/tree/master/packages/adapter-node) (or running [`vite preview`](https://kit.svelte.dev/docs/cli)), this is equivalent to `process.env`. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://kit.svelte.dev/docs/configuration#env) (if configured). - * - * This module cannot be imported into client-side code. 
- * - * ```ts - * import { env } from '$env/dynamic/private'; - * console.log(env.DEPLOYMENT_SPECIFIC_VARIABLE); - * ``` - * - * > In `dev`, `$env/dynamic` always includes environment variables from `.env`. In `prod`, this behavior will depend on your adapter. - */ -declare module '$env/dynamic/private' { - export const env: { - MONGODB_URL: string; - MONGODB_DB_NAME: string; - MONGODB_DIRECT_CONNECTION: string; - COOKIE_NAME: string; - HF_ACCESS_TOKEN: string; - HF_API_ROOT: string; - SERPER_API_KEY: string; - SERPAPI_KEY: string; - OPENID_CLIENT_ID: string; - OPENID_CLIENT_SECRET: string; - OPENID_SCOPES: string; - OPENID_PROVIDER_URL: string; - USE_CLIENT_CERTIFICATE: string; - CERT_PATH: string; - KEY_PATH: string; - CA_PATH: string; - CLIENT_KEY_PASSWORD: string; - REJECT_UNAUTHORIZED: string; - MODELS: string; - OLD_MODELS: string; - PARQUET_EXPORT_DATASET: string; - PARQUET_EXPORT_HF_TOKEN: string; - PARQUET_EXPORT_SECRET: string; - RATE_LIMIT: string; - MESSAGES_BEFORE_LOGIN: string; - ACSetupSvcPort: string; - ACSvcPort: string; - ALLUSERSPROFILE: string; - APPDATA: string; - CHROME_CRASHPAD_PIPE_NAME: string; - COLOR: string; - COLORTERM: string; - CommonProgramFiles: string; - CommonProgramW6432: string; - COMPUTERNAME: string; - ComSpec: string; - DriverData: string; - EDITOR: string; - EFC_38340: string; - EnableLog: string; - GIT_ASKPASS: string; - HOME: string; - HOMEDRIVE: string; - HOMEPATH: string; - INIT_CWD: string; - LANG: string; - LOCALAPPDATA: string; - LOGONSERVER: string; - NODE: string; - NODE_ENV: string; - NODE_EXE: string; - NPM_CLI_JS: string; - npm_command: string; - npm_config_cache: string; - npm_config_engine_strict: string; - npm_config_globalconfig: string; - npm_config_global_prefix: string; - npm_config_init_module: string; - npm_config_local_prefix: string; - npm_config_metrics_registry: string; - npm_config_node_gyp: string; - npm_config_noproxy: string; - npm_config_prefix: string; - npm_config_userconfig: string; - npm_config_user_agent: string; - npm_execpath: string; - npm_lifecycle_event: string; - npm_lifecycle_script: string; - npm_node_execpath: string; - npm_package_json: string; - npm_package_name: string; - npm_package_version: string; - NPM_PREFIX_NPM_CLI_JS: string; - NUMBER_OF_PROCESSORS: string; - OculusBase: string; - OneDrive: string; - OneDriveConsumer: string; - ORIGINAL_XDG_CURRENT_DESKTOP: string; - OS: string; - Path: string; - PATHEXT: string; - PROCESSOR_ARCHITECTURE: string; - PROCESSOR_IDENTIFIER: string; - PROCESSOR_LEVEL: string; - PROCESSOR_REVISION: string; - ProgramData: string; - ProgramFiles: string; - ProgramW6432: string; - PROMPT: string; - PSModulePath: string; - PUBLIC: string; - RlsSvcPort: string; - SESSIONNAME: string; - SystemDrive: string; - SystemRoot: string; - TEMP: string; - TERM_PROGRAM: string; - TERM_PROGRAM_VERSION: string; - TMP: string; - USERDOMAIN: string; - USERDOMAIN_ROAMINGPROFILE: string; - USERNAME: string; - USERPROFILE: string; - VSCODE_GIT_ASKPASS_EXTRA_ARGS: string; - VSCODE_GIT_ASKPASS_MAIN: string; - VSCODE_GIT_ASKPASS_NODE: string; - VSCODE_GIT_IPC_HANDLE: string; - VSCODE_INJECTION: string; - VSCODE_NONCE: string; - windir: string; - [key: `PUBLIC_${string}`]: undefined; - [key: `${string}`]: string | undefined; - } -} - -/** - * Similar to [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), but only includes variables that begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) (which defaults to `PUBLIC_`), and can 
therefore safely be exposed to client-side code. - * - * Note that public dynamic environment variables must all be sent from the server to the client, causing larger network requests — when possible, use `$env/static/public` instead. - * - * ```ts - * import { env } from '$env/dynamic/public'; - * console.log(env.PUBLIC_DEPLOYMENT_SPECIFIC_VARIABLE); - * ``` - */ -declare module '$env/dynamic/public' { - export const env: { - PUBLIC_ORIGIN: string; - PUBLIC_SHARE_PREFIX: string; - PUBLIC_GOOGLE_ANALYTICS_ID: string; - PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID: string; - PUBLIC_ANNOUNCEMENT_BANNERS: string; - PUBLIC_APP_NAME: string; - PUBLIC_APP_ASSETS: string; - PUBLIC_APP_COLOR: string; - PUBLIC_APP_DATA_SHARING: string; - PUBLIC_APP_DISCLAIMER: string; - PUBLIC_VERSION: string; - [key: `PUBLIC_${string}`]: string | undefined; - } -} diff --git a/spaces/miyaaa666/bingo/src/components/chat-list.tsx b/spaces/miyaaa666/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
    - {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
    - ) -} diff --git a/spaces/monra/freegpt-webui-chimera/client/css/main.css b/spaces/monra/freegpt-webui-chimera/client/css/main.css deleted file mode 100644 index ec1f1dd80247747912e1976413a1e3897f1308db..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/client/css/main.css +++ /dev/null @@ -1,14 +0,0 @@ -.main-container { - display: flex; - padding: var(--section-gap); - height: 100vh; - justify-content: center; - box-sizing: border-box; -} - -@media screen and (max-width: 360px) { - .main-container { - padding: 0px; - height: 90vh; - } -} \ No newline at end of file diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/fused_act.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/fused_act.py deleted file mode 100644 index 74815adafbf7a37d5d4def41ac60dbdeefdbff30..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/fused_act.py +++ /dev/null @@ -1,34 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - if bias: - self.bias = nn.Parameter(torch.zeros(channel)) - - else: - self.bias = None - - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, inputs): - return fused_leaky_relu(inputs, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(inputs, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if bias is not None: - rest_dim = [1] * (inputs.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - inputs + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope - ) - * scale - ) - - else: - return F.leaky_relu(inputs, negative_slope=negative_slope) * scale \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/criss/mining/mine.py b/spaces/mshukor/UnIVAL/fairseq/examples/criss/mining/mine.py deleted file mode 100644 index c872da196fe0df776622365748ad7963fee1f0a0..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/criss/mining/mine.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import argparse -import glob -from subprocess import check_call - -try: - import faiss - - has_faiss = True -except ImportError: - has_faiss = False -import numpy as np - - -GB = 1024 * 1024 * 1024 - - -def call(cmd): - print(cmd) - check_call(cmd, shell=True) - - -def get_batches(directory, lang, prefix="all_avg_pool"): - print(f"Finding in {directory}/{prefix}.{lang}*") - files = glob.glob(f"{directory}/{prefix}.{lang}*") - emb_files = [] - txt_files = [] - for emb_fi in files: - emb_files.append(emb_fi) - txt_fi = emb_fi.replace(prefix, "sentences") - txt_files.append(txt_fi) - return emb_files, txt_files - - -def load_batch(emb_file, dim): - embeddings = np.fromfile(emb_file, dtype=np.float32) - num_rows = int(embeddings.shape[0] / dim) - embeddings = embeddings.reshape((num_rows, dim)) - faiss.normalize_L2(embeddings) - return embeddings - - -def knnGPU_sharded(x_batches_f, y_batches_f, dim, k, direction="x2y"): - if not has_faiss: - raise ImportError("Please install Faiss") - sims = [] - inds = [] - xfrom = 0 - xto = 0 - for x_batch_f in x_batches_f: - yfrom = 0 - yto = 0 - x_batch = load_batch(x_batch_f, dim) - xto = xfrom + x_batch.shape[0] - bsims, binds = [], [] - for y_batch_f in y_batches_f: - y_batch = load_batch(y_batch_f, dim) - neighbor_size = min(k, y_batch.shape[0]) - yto = yfrom + y_batch.shape[0] - print("{}-{} -> {}-{}".format(xfrom, xto, yfrom, yto)) - idx = faiss.IndexFlatIP(dim) - idx = faiss.index_cpu_to_all_gpus(idx) - idx.add(y_batch) - bsim, bind = idx.search(x_batch, neighbor_size) - - bsims.append(bsim) - binds.append(bind + yfrom) - yfrom += y_batch.shape[0] - del idx - del y_batch - bsims = np.concatenate(bsims, axis=1) - binds = np.concatenate(binds, axis=1) - aux = np.argsort(-bsims, axis=1) - sim_batch = np.zeros((x_batch.shape[0], k), dtype=np.float32) - ind_batch = np.zeros((x_batch.shape[0], k), dtype=np.int64) - for i in range(x_batch.shape[0]): - for j in range(k): - sim_batch[i, j] = bsims[i, aux[i, j]] - ind_batch[i, j] = binds[i, aux[i, j]] - sims.append(sim_batch) - inds.append(ind_batch) - xfrom += x_batch.shape[0] - del x_batch - sim = np.concatenate(sims, axis=0) - ind = np.concatenate(inds, axis=0) - return sim, ind - - -def score(sim, fwd_mean, bwd_mean, margin): - return margin(sim, (fwd_mean + bwd_mean) / 2) - - -def score_candidates( - sim_mat, candidate_inds, fwd_mean, bwd_mean, margin, verbose=False -): - print(" - scoring {:d} candidates".format(sim_mat.shape[0])) - scores = np.zeros(candidate_inds.shape) - for i in range(scores.shape[0]): - for j in range(scores.shape[1]): - k = int(candidate_inds[i, j]) - scores[i, j] = score(sim_mat[i, j], fwd_mean[i], bwd_mean[k], margin) - return scores - - -def load_text(files): - all_sentences = [] - for fi in files: - with open(fi) as sentence_fi: - for line in sentence_fi: - all_sentences.append(line.strip()) - print(f"Read {len(all_sentences)} sentences") - return all_sentences - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Mine bitext") - parser.add_argument("--src-lang", help="Source language") - parser.add_argument("--tgt-lang", help="Target language") - parser.add_argument( - "--dict-path", help="Path to dictionary file", default="dict.txt" - ) - parser.add_argument( - "--spm-path", help="Path to SPM model file", default="sentence.bpe.model" - ) - parser.add_argument("--dim", type=int, default=1024, help="Embedding dimension") - parser.add_argument("--mem", type=int, default=5, help="Memory in GB") - parser.add_argument("--src-dir", help="Source 
directory") - parser.add_argument("--tgt-dir", help="Target directory") - parser.add_argument("--output", help="Output path") - parser.add_argument( - "--neighborhood", type=int, default=4, help="Embedding dimension" - ) - parser.add_argument( - "--threshold", type=float, default=1.06, help="Threshold on mined bitext" - ) - parser.add_argument( - "--valid-size", - type=int, - default=2000, - help="Number of sentences used for validation set", - ) - parser.add_argument( - "--min-count", - type=int, - default=50000, - help="Min num sentences used for each language", - ) - args = parser.parse_args() - - x_batches_f, x_sents_f = get_batches(args.src_dir, args.src_lang) - y_batches_f, y_sents_f = get_batches(args.tgt_dir, args.tgt_lang) - margin = lambda a, b: a / b - y2x_sim, y2x_ind = knnGPU_sharded( - y_batches_f, x_batches_f, args.dim, args.neighborhood, direction="y2x" - ) - x2y_sim, x2y_ind = knnGPU_sharded( - x_batches_f, y_batches_f, args.dim, args.neighborhood, direction="x2y" - ) - - x2y_mean = x2y_sim.mean(axis=1) - y2x_mean = y2x_sim.mean(axis=1) - fwd_scores = score_candidates(x2y_sim, x2y_ind, x2y_mean, y2x_mean, margin) - bwd_scores = score_candidates(y2x_sim, y2x_ind, y2x_mean, x2y_mean, margin) - fwd_best = x2y_ind[np.arange(x2y_sim.shape[0]), fwd_scores.argmax(axis=1)] - bwd_best = y2x_ind[np.arange(y2x_sim.shape[0]), bwd_scores.argmax(axis=1)] - indices = np.stack( - ( - np.concatenate((np.arange(x2y_ind.shape[0]), bwd_best)), - np.concatenate((fwd_best, np.arange(y2x_ind.shape[0]))), - ), - axis=1, - ) - scores = np.concatenate((fwd_scores.max(axis=1), bwd_scores.max(axis=1))) - - x_sentences = load_text(x_sents_f) - y_sentences = load_text(y_sents_f) - - threshold = args.threshold - min_count = args.min_count - seen_src, seen_trg = set(), set() - directory = args.output - call(f"mkdir -p {directory}") - src_out = open( - f"{directory}/all.{args.src_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - tgt_out = open( - f"{directory}/all.{args.tgt_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - scores_out = open( - f"{directory}/all.scores", mode="w", encoding="utf-8", errors="surrogateescape" - ) - count = 0 - for i in np.argsort(-scores): - src_ind, trg_ind = indices[i] - if src_ind not in seen_src and trg_ind not in seen_trg: - seen_src.add(src_ind) - seen_trg.add(trg_ind) - if scores[i] > threshold or count < min_count: - if x_sentences[src_ind]: - print(scores[i], file=scores_out) - print(x_sentences[src_ind], file=src_out) - print(y_sentences[trg_ind], file=tgt_out) - count += 1 - else: - print(f"Ignoring sentence: {x_sentences[src_ind]}") - src_out.close() - tgt_out.close() - scores_out.close() - - print(f"Found {count} pairs for threshold={threshold}") - with open(f"{directory}/all.{args.src_lang}") as all_s, open( - f"{directory}/all.{args.tgt_lang}" - ) as all_t, open(f"{directory}/valid.{args.src_lang}", "w") as valid_s, open( - f"{directory}/valid.{args.tgt_lang}", "w" - ) as valid_t, open( - f"{directory}/train.{args.src_lang}", "w" - ) as train_s, open( - f"{directory}/train.{args.tgt_lang}", "w" - ) as train_t: - count = 0 - for s_line, t_line in zip(all_s, all_t): - s_line = s_line.split("\t")[1] - t_line = t_line.split("\t")[1] - if count >= args.valid_size: - train_s.write(s_line) - train_t.write(t_line) - else: - valid_s.write(s_line) - valid_t.write(t_line) - count += 1 diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/README.md 
deleted file mode 100644 index 3810311f30f99f0a07fd8e5d3723bffeba9948c3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Paraphrasing with round-trip translation and mixture of experts - -Machine translation models can be used to paraphrase text by translating it to -an intermediate language and back (round-trip translation). - -This example shows how to paraphrase text by first passing it to an -English-French translation model, followed by a French-English [mixture of -experts translation model](/examples/translation_moe). - -##### 0. Setup - -Clone fairseq from source and install necessary dependencies: -```bash -git clone https://github.com/pytorch/fairseq.git -cd fairseq -pip install --editable . -pip install sacremoses sentencepiece -``` - -##### 1. Download models -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.en-fr.tar.gz -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.fr-en.hMoEup.tar.gz -tar -xzvf paraphraser.en-fr.tar.gz -tar -xzvf paraphraser.fr-en.hMoEup.tar.gz -``` - -##### 2. Paraphrase -```bash -python examples/paraphraser/paraphrase.py \ - --en2fr paraphraser.en-fr \ - --fr2en paraphraser.fr-en.hMoEup -# Example input: -# The new date for the Games, postponed for a year in response to the coronavirus pandemic, gives athletes time to recalibrate their training schedules. -# Example outputs: -# Delayed one year in response to the coronavirus pandemic, the new date of the Games gives athletes time to rebalance their training schedule. -# The new date of the Games, which was rescheduled one year in response to the coronavirus (CV) pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, provides athletes with time to rebalance their training schedule. -# The Games' new date, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new Games date, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, which was postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to re-balance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their schedule of training. -# The new date of the Games, postponed one year in response to the pandemic of coronavirus, gives the athletes time to rebalance their training schedule. 
-``` diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py deleted file mode 100644 index b5af7f723eb8047bc58db2f85234aea161fbc659..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch -import numpy as np -from scipy.signal import get_window -import librosa.util as librosa_util - - -def window_sumsquare(window, n_frames, hop_length=200, win_length=800, - n_fft=800, dtype=np.float32, norm=None): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - - n_frames : int > 0 - The number of analysis frames - - hop_length : int > 0 - The number of samples to advance between frames - - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - - n_fft : int > 0 - The length of each analysis frame. - - dtype : np.dtype - The data type of the output - - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm)**2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/colorize_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/colorize_dataset.py deleted file mode 100644 index 6ef097bff1a013f4944b1cb55e1e7e4e2480b3a6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/colorize_dataset.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . 
import BaseWrapperDataset - - -class ColorizeDataset(BaseWrapperDataset): - """ Adds 'colors' property to net input that is obtained from the provided color getter for use by models """ - - def __init__(self, dataset, color_getter): - super().__init__(dataset) - self.color_getter = color_getter - - def collater(self, samples): - base_collate = super().collater(samples) - if len(base_collate) > 0: - base_collate["net_input"]["colors"] = torch.tensor( - list(self.color_getter(self.dataset, s["id"]) for s in samples), - dtype=torch.long, - ) - return base_collate diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/fairseq_decoder.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/fairseq_decoder.py deleted file mode 100644 index 4f1e8b52a2e0a50199050f11cc613ab02ca9febe..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/fairseq_decoder.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional, Tuple - -import torch.nn as nn -from fairseq import utils -from torch import Tensor - - -class FairseqDecoder(nn.Module): - """Base class for decoders.""" - - def __init__(self, dictionary): - super().__init__() - self.dictionary = dictionary - self.onnx_trace = False - self.adaptive_softmax = None - - - def forward(self, prev_output_tokens, encoder_out=None, **kwargs): - """ - Args: - prev_output_tokens (LongTensor): shifted output tokens of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (dict, optional): output from the encoder, used for - encoder-side attention - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - x, extra = self.extract_features( - prev_output_tokens, encoder_out=encoder_out, **kwargs - ) - x = self.output_layer(x) - return x, extra - - def extract_features(self, prev_output_tokens, encoder_out=None, **kwargs): - """ - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def output_layer(self, features, **kwargs): - """ - Project features to the default output size, e.g., vocabulary size. - - Args: - features (Tensor): features returned by *extract_features*. - """ - raise NotImplementedError - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. 
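    # Illustrative sketch of the same workaround pattern (not from this file; names are
    # hypothetical): the public method only delegates to a helper with a different name,
    # and a TorchScript-compatible subclass calls the helper directly instead of super():
    #
    #     class Base(nn.Module):
    #         def get_probs(self, out):              # convenient Python entry point
    #             return self._get_probs_impl(out)
    #
    #         def _get_probs_impl(self, out):        # TorchScript-friendly helper
    #             return utils.softmax(out, dim=-1)
    #
    #     class ScriptableChild(Base):
    #         def get_probs(self, out):
    #             return self._get_probs_impl(out)   # avoids the unsupported super() call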
- def get_normalized_probs_scriptable( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - - if hasattr(self, "adaptive_softmax") and self.adaptive_softmax is not None: - if sample is not None: - assert "target" in sample - target = sample["target"] - else: - target = None - out = self.adaptive_softmax.get_log_prob(net_output[0], target=target) - return out.exp_() if not log_probs else out - - logits = net_output[0] - if log_probs: - return utils.log_softmax(logits, dim=-1, onnx_trace=self.onnx_trace) - else: - return utils.softmax(logits, dim=-1, onnx_trace=self.onnx_trace) - - def max_positions(self): - """Maximum input length supported by the decoder.""" - return 1e6 # an arbitrary large number - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade old state dicts to work with newer code.""" - return state_dict - - def prepare_for_onnx_export_(self): - self.onnx_trace = True diff --git a/spaces/mshukor/UnIVAL/models/search.py b/spaces/mshukor/UnIVAL/models/search.py deleted file mode 100644 index 568612212bdbbe787c7ab64017f8170ec67619f8..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/search.py +++ /dev/null @@ -1,814 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -import math -from typing import List, Optional - -import torch -import torch.nn as nn -from fairseq.token_generation_constraints import ( - ConstraintState, - OrderedConstraintState, - UnorderedConstraintState, -) -from torch import Tensor - - -class Search(nn.Module): - def __init__(self, tgt_dict): - super().__init__() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.src_lengths = torch.tensor(-1) - self.supports_constraints = False - self.stop_on_max_len = False - - def step( - self, step, lprobs, scores, prev_output_tokens=None, original_batch_idxs=None - ): - """Take a single search step. - - Args: - step: the current search step, starting at 0 - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - scores: (bsz x input_beam_size x step) - the historical model scores of each hypothesis up to this point - prev_output_tokens: (bsz x step) - the previously generated oputput tokens - original_batch_idxs: (bsz) - the tensor with the batch indices, in the range [0, bsz) - this is useful in case there has been applied a re-ordering - and we need to know the orignal indices - - Return: A tuple of (scores, indices, beams) where: - scores: (bsz x output_beam_size) - the scores of the chosen elements; output_beam_size can be - larger than input_beam_size, e.g., we may return - 2*input_beam_size to account for EOS - indices: (bsz x output_beam_size) - the indices of the chosen elements - beams: (bsz x output_beam_size) - the hypothesis ids of the chosen elements, in the range [0, input_beam_size) - """ - raise NotImplementedError - - @torch.jit.export - def set_src_lengths(self, src_lengths): - self.src_lengths = src_lengths - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - """Initialize constraint states for constrained decoding (if supported). 
- - Args: - batch_constraints: (torch.Tensor, optional) - the list of constraints, in packed form - beam_size: (int) - the beam size - Returns: - *encoder_out* rearranged according to *new_order* - """ - pass - - def prune_sentences(self, batch_idxs: Tensor): - """ - Removes constraint states for completed sentences (if supported). - This is called from sequence_generator._generate() when sentences are - deleted from the batch. - - Args: - batch_idxs: Indices of *sentences* whose constraint state should be *kept*. - """ - pass - - def update_constraints(self, active_hypos: Tensor): - """ - Updates the constraint states by selecting the beam items that are retained. - This is called at each time step of sequence_generator._generate() when - the set of 2 * {beam_size} candidate hypotheses are reduced to the beam size. - - Args: - active_hypos: (batch size, beam size) - list of integers denoting, for each sentence, which beam candidate items - should be kept. - """ - pass - - -class BeamSearch(Search): - def __init__(self, tgt_dict): - super().__init__(tgt_dict) - self.constraint_states = None - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
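                # topk runs over the flattened (beam x vocab) axis, so each returned index
                # encodes both a beam and a token; below, index // vocab_size recovers the
                # originating beam (beams_buf) and index % vocab_size the token id
                # (indices_buf). Keeping 2 * beam_size candidates leaves enough non-EOS
                # continuations even if up to beam_size of them predict EOS at this step.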
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # At this point, beams_buf and indices_buf are single-dim and contain relative indices - return scores_buf, indices_buf, beams_buf - - -class PrefixConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, prefix_allowed_tokens_fn): - super().__init__(tgt_dict) - self.prefix_allowed_tokens_fn = prefix_allowed_tokens_fn - self.stop_on_max_len = True - - @torch.jit.export - def apply_mask(self, x, prev_output_tokens, original_batch_idxs): - beam_size = x.shape[0] // original_batch_idxs.shape[0] - original_batch_idxs = ( - original_batch_idxs.unsqueeze(-1).repeat((1, beam_size)).flatten().tolist() - ) - - mask = torch.full_like(x, -math.inf) - for sent_i, (sent, batch_i) in enumerate( - zip(prev_output_tokens, original_batch_idxs) - ): - mask[sent_i, :, self.prefix_allowed_tokens_fn(batch_i, sent)] = 0 - - return mask - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Tensor, - prev_output_tokens: Tensor, - original_batch_idxs: Tensor, - ): - bsz, beam_size, vocab_size = lprobs.size() - - lprobs += self.apply_mask( - lprobs.view(bsz * beam_size, 1, vocab_size), - prev_output_tokens, - original_batch_idxs, - ).view(bsz, beam_size, vocab_size) - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. - beam_size, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - return scores_buf, indices_buf, beams_buf - - -class LexicallyConstrainedBeamSearch(Search): - """Implements lexically constrained beam search as described in - - Fast Lexically Constrained Decoding with Dynamic Beam - Allocation for Neural Machine Translation. Post & Vilar, - NAACL 2018. https://www.aclweb.org/anthology/N18-1119/ - - and - - Improved Lexically Constrained Decoding for Translation and - Monolingual Rewriting. Hu et al, NAACL - 2019. https://www.aclweb.org/anthology/N19-1090/ - - This is accomplished by maintaining, for each beam hypothesis, a - ConstraintState object (see constraints.py) that tracks which - constraints have been generated and using this information to - shape the beam for each input sentence. 
- """ - - def __init__(self, tgt_dict, representation): - super().__init__(tgt_dict) - self.representation = representation - self.vocab_size = len(tgt_dict) - self.num_cands = 0 - self.supports_constraints = True - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - self.constraint_states = [] - for constraint_tensor in batch_constraints: - if self.representation == "ordered": - constraint_state = OrderedConstraintState.create(constraint_tensor) - elif self.representation == "unordered": - constraint_state = UnorderedConstraintState.create(constraint_tensor) - - self.constraint_states.append([constraint_state for i in range(beam_size)]) - - @torch.jit.export - def prune_sentences(self, batch_idxs: Tensor): - self.constraint_states = [ - self.constraint_states[i] for i in batch_idxs.tolist() - ] - - @torch.jit.export - def update_constraints(self, active_hypos: Tensor): - if self.constraint_states: - batch_size = active_hypos.size(0) - for sentid in range(batch_size): - self.constraint_states[sentid] = [ - self.constraint_states[sentid][i] for i in active_hypos[sentid] - ] - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - """ - A constrained step builds a large candidates list from the following: - - the top 2 * {beam_size} items over the whole beam - - for each item in the beam - - the top {each_k} (default 1) - - all next constraints - We then compute the constrained state of each beam item, and assign - stripe codes: 0 to the best in each bank, 1 to the 2nd-best, and so - on. We then sort by (stripe, score), and truncate the list at - 2 * beam size. - - Args: - step: the decoder step - lprobs: (batch size, beam size, target vocab) - the target-vocab distributions for each item in the beam. - Retrun: A tuple of (scores, indices, beams, constraints) where: - scores: (batch, output beam size) - the scores of the chosen elements - indices: (batch, output beam size) - the target vocab indices of the chosen elements - beams: (batch, output beam size) - the 0-indexed hypothesis ids of the chosen elements - constraints: (batch, output beam size) - the new constraint states - """ - each_k = 1 - device = lprobs.device - - batch_size, beam_size, vocab_size = lprobs.size() - - self.num_cands = min( - # Just take the k-best. We'll get another k from the 1-best from each - # row, plus more from the constraints - beam_size * 2, - lprobs.view(batch_size, -1).size(1) - 1, # -1 so we never select pad - ) - - # STEP 0: Preliminary. 
Prevent EOS for unfinished hyps across all batch items - constraint_states = self.constraint_states - if constraint_states and step > 0: - not_finished_indices = [] - for sentno, sent_constraints in enumerate(constraint_states): - for beamno, state in enumerate(sent_constraints): - index = sentno * beam_size + beamno - if not state.finished: - not_finished_indices.append(index) - not_finished_indices = torch.tensor(not_finished_indices) - if not_finished_indices.numel() > 0: - lprobs.view(batch_size * beam_size, -1)[ - not_finished_indices, self.eos - ] = -math.inf - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam entry for each batch item - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(batch_size, -1), - self.num_cands, - ) - scores_buf, indices_buf = top_prediction - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # Short circuit if there are no constraints in this batch - if not constraint_states: - return scores_buf, indices_buf, beams_buf - - # STEP 1: get top-1 from each hypothesis across all sentences in the batch - if step > 0: - top_scores, top_indices = torch.topk( - lprobs.view(batch_size * beam_size, -1), - k=each_k, - dim=1, - ) - top_scores = top_scores.view(batch_size, -1) - top_indices = top_indices.view(batch_size, -1) - scores_buf = torch.cat((scores_buf, top_scores), dim=1) - indices_buf = torch.cat((indices_buf, top_indices), dim=1) - new_beams = torch.arange(0, beam_size, device=device).repeat(batch_size, 1) - beams_buf = torch.cat((beams_buf, new_beams), dim=1) - - # Now, process sentences in the batch one by one. - new_scores_buf = torch.zeros((batch_size, 2 * beam_size), device=device) - new_indices_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - new_beams_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - for sentno, states in enumerate(constraint_states): - scores, indices, beams, new_states = self.step_sentence( - step, - sentno, - lprobs[sentno], - constraint_states[sentno], - beams_buf[sentno].clone(), - indices_buf[sentno].clone(), - scores_buf[sentno].clone(), - ) - new_scores_buf[sentno] = scores - new_indices_buf[sentno] = indices - new_beams_buf[sentno] = beams - self.constraint_states[sentno] = new_states - - return new_scores_buf, new_indices_buf, new_beams_buf - - @torch.jit.export - def step_sentence( - self, - step: int, - sentno: int, - lprobs: Tensor, - constraint_states: List[List[ConstraintState]], - beams_buf: Tensor, - indices_buf: Tensor, - scores_buf: Tensor, - ): - """Does per-sentence processing. Adds all constraints for each - hypothesis to the list of candidates; then removes duplicates, - sorts, and dynamically stripes across the banks. All tensor inputs - are collapsed to those pertaining to a single input sentence. 
- """ - device = lprobs.device - - # STEP 2: Add all constraints for each beam item - for beamno, state in enumerate(constraint_states): - next_tokens = torch.tensor(list(state.next_tokens()), device=device).long() - if next_tokens.numel() != 0: - indices_buf = torch.cat((indices_buf, next_tokens)) - next_beams = ( - torch.tensor(beamno, device=device) - .repeat(next_tokens.size(0)) - .long() - ) - beams_buf = torch.cat((beams_buf, next_beams)) - next_values = lprobs[beamno].take(next_tokens.view(-1)) - scores_buf = torch.cat((scores_buf, next_values)) - - # At the 0th time step, there is just one beam item - if step == 0: - break - - # STEP 3: Compute the "bank" for each candidate. This is the - # number of constraints it's generated. We need this so that - # we can do round-robin allocation of the beam across these - # banks. If C is the number of constraints, we select the best - # item in bank C, then the best in bank C-1, etc, followed by - # the 2nd-best in bank C, the 2nd-best in bank C-1, etc, and so - # on, until the maximum beam size. We accomplish this by - # creating a sort key and striping across the banks. - - # Compute the new states for all candidates - cands_size = indices_buf.size(0) - constraint_states = [ - constraint_states[beams_buf[i]].advance(indices_buf[i]) - for i in range(cands_size) - ] - - banks = torch.tensor([state.bank for state in constraint_states], device=device) - - # STEP 4: Sort - num_constraint_tokens = len(state.tokens) - - # Sort by keys (bank, score) (i.e., sort banks together, and scores - # within banks). AFAIK pytorch doesn't support either stable sort or - # multi-key sorting, so we have to hack this. - MAX_SCORE = -100 - sort_key = (num_constraint_tokens - banks) * MAX_SCORE + scores_buf - sort_values, sort_indices = sort_key.sort(dim=0, descending=True) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - banks = banks[sort_indices] - - # Sort the constraints to follow suit - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 5: Remove duplicates. The topk calls (overall and - # per-row) plus the per-row generation of constraints will - # produce duplicates. Here we remove them. - - def roll(t): - """Rolls a 1d tensor left by 1. - - [0, 1, 2, 3, 4] becomes [4, 0, 1, 2, 3] - """ - return torch.cat((t[-1].unsqueeze(0), t[0:-1]), dim=0) - - # We map candidates (beam, token_id) to a single dimension. - # This is then shifted by 1. We can then easily identify - # duplicates and create a mask that identifies unique - # extensions. - uniques_mask = beams_buf * (self.vocab_size + 1) + indices_buf - uniques_mask = roll(uniques_mask) != uniques_mask - - # Use the mask to pare down the data structures - scores_buf = torch.masked_select(scores_buf, uniques_mask) - indices_buf = torch.masked_select(indices_buf, uniques_mask) - beams_buf = torch.masked_select(beams_buf, uniques_mask) - banks = torch.masked_select(banks, uniques_mask) - i = 1 - for mask in uniques_mask[1:]: - if not mask: - constraint_states.pop(i) - i += mask - - # STEP 6: Assign IDs round-robin across banks, sort, and - # truncate. Now that the candidates are sorted by (bank, - # score) and uniqed, we dynamically allocate the {beam_size} - # beam by striping across the candidates. These stripes will - # be used as sort keys to do round-robin selection. This is - # accomplished in a single pass with offsets. 
Sorting by - # highest-banks (furthest-along hypotheses) first ensures - # progress through the constraints. - # - # e.g., BANKS: 3 3 3 2 2 2 2 1 1 1 0 0 - # OLD STRIPES: 0 1 2 0 1 2 3 0 1 2 0 1 - # NEW STRIPES: 0 1+4 2+8 0+1 1+5 2+9 3+11 0+2 1+6 2+10 0+3 1+7 - # = 0 5 10 1 6 11 13 2 7 12 3 8 - # - # Sorting by this then gives the following banks: - # - # 3 2 1 0 3 2 1 0 3 2 1 2 - # - # We'll take the top {beam_size} of these. - stripe_offsets = [offset * (len(banks) + 1) for offset in range(len(banks) + 1)] - stripes = torch.zeros_like(banks) - cur_bank_count = -1 - cur_bank = banks[0] - for i, bank in enumerate(banks): - if bank != cur_bank: - cur_bank_count = 0 - cur_bank = bank - else: - cur_bank_count += 1 - stripes[i] = num_constraint_tokens - bank + stripe_offsets[cur_bank_count] - - # STEP 7: Sort by the stripes values - sort_values, sort_indices = stripes.sort(dim=0) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 8: Truncate to the candidates size! - scores_buf = scores_buf[: self.num_cands] - indices_buf = indices_buf[: self.num_cands] - beams_buf = beams_buf[: self.num_cands] - - return scores_buf, indices_buf, beams_buf, constraint_states - - -class LengthConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, min_len_a, min_len_b, max_len_a, max_len_b): - super().__init__(tgt_dict) - self.min_len_a = min_len_a - self.min_len_b = min_len_b - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.beam = BeamSearch(tgt_dict) - self.needs_src_lengths = True - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - min_lens = self.min_len_a * self.src_lengths + self.min_len_b - max_lens = self.max_len_a * self.src_lengths + self.max_len_b - lprobs[step < min_lens, :, self.eos] = -math.inf - lprobs[step >= max_lens, :, self.eos] = 0 - return self.beam.step(step, lprobs, scores) - - -class DiverseBeamSearch(Search): - """Diverse Beam Search. - - See "Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence - Models" for details. - - We only implement the Hamming Diversity penalty here, which performed best - in the original paper. 
- """ - - def __init__(self, tgt_dict, num_groups, diversity_strength): - super().__init__(tgt_dict) - self.num_groups = num_groups - self.diversity_strength = -diversity_strength - self.beam = BeamSearch(tgt_dict) - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - if beam_size % self.num_groups != 0: - raise ValueError( - "DiverseBeamSearch requires --beam to be divisible by the number of groups" - ) - - # initialize diversity penalty - diversity_buf = torch.zeros(lprobs[:, 0, :].size()).to(lprobs) - - scores_G, indices_G, beams_G = [], [], [] - for g in range(self.num_groups): - lprobs_g = lprobs[:, g :: self.num_groups, :] - scores_g = scores[:, g :: self.num_groups, :] if step > 0 else None - - # apply diversity penalty - if g > 0: - lprobs_g = torch.add( - lprobs_g, - other=diversity_buf.unsqueeze(1), - alpha=self.diversity_strength, - ) - else: - lprobs_g = lprobs_g.contiguous() - - scores_buf, indices_buf, beams_buf = self.beam.step( - step, lprobs_g, scores_g - ) - beams_buf.mul_(self.num_groups).add_(g) - - scores_G.append(scores_buf.clone()) - indices_G.append(indices_buf.clone()) - beams_G.append(beams_buf.clone()) - - # update diversity penalty - diversity_buf.scatter_add_( - 1, indices_buf, torch.ones(indices_buf.size()).to(diversity_buf) - ) - - # interleave results from different groups - scores_buf = torch.stack(scores_G, dim=2).view(bsz, -1) - indices_buf = torch.stack(indices_G, dim=2).view(bsz, -1) - beams_buf = torch.stack(beams_G, dim=2).view(bsz, -1) - return scores_buf, indices_buf, beams_buf - - -class Sampling(Search): - sampling_topk: int - sampling_topp: float - - def __init__(self, tgt_dict, sampling_topk=-1, sampling_topp=-1.0): - super().__init__(tgt_dict) - self.sampling_topk = sampling_topk - self.sampling_topp = sampling_topp - - def _sample_topp(self, lprobs): - """Sample among the smallest set of elements whose cumulative probability mass exceeds p. - - See `"The Curious Case of Neural Text Degeneration" - (Holtzman et al., 2019) `_. - - Args: - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - - Return: A tuple of (trimed_probs, truncated_indices) where: - trimed_probs: (bsz x input_beam_size x ?) - the model's probabilities over the elements selected to sample from. The - width of the third dimension is determined by top-P. - truncated_indices: (bsz x input_beam_size x ?) - the indices of the chosen elements. - """ - probs = lprobs.exp_() - - # sort the last dimension (vocab dimension) in descending order - sorted_probs, sorted_indices = probs.sort(descending=True) - - # compute a mask to indicate the words to be included in the top-P set. - cumsum_probs = sorted_probs.cumsum(dim=2) - mask = cumsum_probs.lt(self.sampling_topp) - - # note that mask was computed by 'lt'. One more word needs to be included - # so that the cumulative probability mass can exceed p. - cumsum_mask = mask.cumsum(dim=2) - last_included = cumsum_mask[:, :, -1:] - last_included.clamp_(0, mask.size()[2] - 1) - mask = mask.scatter_(2, last_included, 1) - - # truncate unnecessary dims. 
- max_dim = last_included.max() - truncated_mask = mask[:, :, : max_dim + 1] - truncated_probs = sorted_probs[:, :, : max_dim + 1] - truncated_indices = sorted_indices[:, :, : max_dim + 1] - - # trim the words that are not in top-P by setting their probabilities - # to 0, so that they would not be sampled later. - trim_mask = ~truncated_mask - trimed_probs = truncated_probs.masked_fill_(trim_mask, 0) - return trimed_probs, truncated_indices - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - - if self.sampling_topp > 0: - # only sample from the smallest set of words whose cumulative probability mass exceeds p - probs, top_indices = self._sample_topp(lprobs) - elif self.sampling_topk > 0: - # only sample from top-k candidates - lprobs, top_indices = lprobs.topk(self.sampling_topk) - probs = lprobs.exp_() - else: - probs = lprobs.exp_() - - # dummy data to be consistent with true branch for type check - top_indices = torch.empty(0).to(probs) - # sample - if step == 0: - indices_buf = torch.multinomial( - probs.view(bsz, -1), - beam_size, - replacement=True, - ).view(bsz, beam_size) - else: - indices_buf = torch.multinomial( - probs.view(bsz * beam_size, -1), - 1, - replacement=True, - ).view(bsz, beam_size) - - if step == 0: - # expand to beam size - probs = probs.expand(bsz, beam_size, -1) - - # gather scores - scores_buf = torch.gather(probs, dim=2, index=indices_buf.unsqueeze(-1)) - scores_buf = scores_buf.log_().view(bsz, -1) - - # remap indices if using top-k or top-P sampling - if self.sampling_topk > 0 or self.sampling_topp > 0: - indices_buf = torch.gather( - top_indices.expand(bsz, beam_size, -1), - dim=2, - index=indices_buf.unsqueeze(-1), - ).squeeze(2) - - if step == 0: - beams_buf = indices_buf.new_zeros(bsz, beam_size) - else: - beams_buf = torch.arange(0, beam_size).to(indices_buf).repeat(bsz, 1) - # make scores cumulative - scores_buf.add_( - torch.gather(scores[:, :, step - 1], dim=1, index=beams_buf) - ) - - return scores_buf, indices_buf, beams_buf - - -class DiverseSiblingsSearch(Search): - """ - Beam search with diverse siblings. - - See "A Simple, Fast Diverse Decoding Algorithm for Neural Generation" for details. - https://arxiv.org/abs/1611.08562 - - 1/ Calculate hypotheses for each beam - 2/ Intra-sibling ordering - 3/ Rewrite scores - 4/ Choose top K hypotheses - - if diversity_rate == 0 is equivalent to BeamSearch - """ - - def __init__(self, tgt_dict, diversity_rate): - super().__init__(tgt_dict) - self.diversity_rate = diversity_rate - self.beam = BeamSearch(tgt_dict) - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - k = min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ) - s_list: List[Tensor] - i_list: List[Tensor] - s_list = [torch.empty(0).to(lprobs) for i in range(beam_size)] - i_list = [torch.LongTensor().to(device=lprobs.device) for i in range(beam_size)] - sibling_score = torch.arange(1, k + 1).to(lprobs) * self.diversity_rate - - if step == 0: - return self.beam.step(step, lprobs, scores) - lprobs.add_(scores[:, :, step - 1].unsqueeze(-1)) - - # 1/ Calculate hypotheses for each beam - for i in range(beam_size): - torch.topk(lprobs[:, i, :].view(bsz, -1), k, out=(s_list[i], i_list[i])) - i_list[i].fmod_(vocab_size) - - # 2/ Intra-sibling ordering by default from topk + 3/ Rewrite scores - s_list[i].sub_(sibling_score) - - # 4/ Choose top K hypotheses - indices = torch.stack(i_list, dim=1).view(bsz, -1) - - final_scores = torch.empty(0).to(lprobs) - final_indices = torch.LongTensor().to(device=lprobs.device) - final_beams = torch.LongTensor().to(device=lprobs.device) - (final_scores, final_indices) = torch.topk( - torch.stack(s_list, dim=1).view(bsz, -1), - k, - ) - - final_beams = final_indices // k - - for i in range(bsz): - final_indices[i] = indices[i][final_indices[i]] - - return final_scores, final_indices, final_beams diff --git a/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/eval/eval_refcoco.sh b/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/eval/eval_refcoco.sh deleted file mode 100644 index c5a415dc514b1795d7ad862b2213b28b2581969e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/eval/eval_refcoco.sh +++ /dev/null @@ -1,147 +0,0 @@ -#!/usr/bin/env bash - -# The port for communication. Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. 
-# Number of GPUs per GPU worker -export GPUS_PER_NODE=8 -# Number of GPU workers, for single-worker training, please set to 1 -export NUM_NODES=$SLURM_NNODES -# The ip address of the rank-0 worker, for single-worker training, please set to localhost -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -# The port for communication -export MASTER_PORT=12350 -# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 -export RANK=$SLURM_NODEID - -echo "MASTER_ADDR: $MASTER_ADDR" -echo "RANK :$RANK" -echo "NUM_NODES :$NUM_NODES" -echo "GPUS_PER_NODE :$GPUS_PER_NODE" - -export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/ - -echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH" - -num_workers=0 - - - - - - -ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival -base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data -base_log_dir=/work/NAT/gda2204/mshukor/logs - - - - -bpe_dir=${ofa_dir}/utils/BPE -user_dir=${ofa_dir}/ofa_module - - - - -selected_cols=0,4,2,3 - - -image_encoder_name=resnet #vit_base_patch16_224 - - - - -exp_name=eval_refcocoplus_base_best_refcocoplus_ratacapsnlivqagroundofapt -path=${base_log_dir}/ofa/pretrained_models/average_models/refcocoplus_ratacapsnlivqagroundofapt.pt - -echo ${path} -result_path=${base_log_dir}/ofa/results/refcocoplus/${exp_name} -mkdir ${result_path} - - - - -data=${base_data_dir}/ofa/refcocoplus_data/refcocoplus_val.tsv -split='refcocoplus_val' - -python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=refcoco \ - --batch-size=16 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --beam=5 \ - --min-len=4 \ - --max-len-a=0 \ - --max-len-b=4 \ - --no-repeat-ngram-size=3 \ - --fp16 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\"}" - -data=${base_data_dir}/ofa/refcocoplus_data/refcocoplus_testA.tsv -split='refcocoplus_testA' -python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=refcoco \ - --batch-size=16 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --beam=5 \ - --min-len=4 \ - --max-len-a=0 \ - --max-len-b=4 \ - --no-repeat-ngram-size=3 \ - --fp16 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\"}" - - -data=${base_data_dir}/ofa/refcocoplus_data/refcocoplus_testB.tsv -split='refcocoplus_testB' - -python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=refcoco \ - --batch-size=16 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --beam=5 
\ - --min-len=4 \ - --max-len-a=0 \ - --max-len-b=4 \ - --no-repeat-ngram-size=3 \ - --fp16 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\"}" diff --git a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_duke.py b/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_duke.py deleted file mode 100644 index 188dd70dc7a97337829cde7ecab749836ff3cbb7..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_duke.py +++ /dev/null @@ -1,27 +0,0 @@ -from medical_diffusion.data.datasets import DUKEDataset - -import matplotlib.pyplot as plt -from pathlib import Path -from torchvision.utils import save_image -from pathlib import Path - -path_out = Path().cwd()/'results'/'test' -path_out.mkdir(parents=True, exist_ok=True) - -ids = [int(path_file.stem.split('_')[-1]) for path_file in Path('/mnt/hdd/datasets/breast/Diffusion2D/images').glob('*.png')] -print(min(ids), max(ids)) # [0, 53] - -ds = DUKEDataset( - crawler_ext='png', - image_resize=None, - image_crop=None, - path_root='/mnt/hdd/datasets/breast/Diffusion2D/images', -) - -print(ds[0]) -images = [ds[n]['source'] for n in range(4)] - - - - -save_image(images, path_out/'test.png') \ No newline at end of file diff --git a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/quarto-html/tippy.css b/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/quarto-html/tippy.css deleted file mode 100644 index e6ae635cb1f82b176c18afa80dfa029c7a536e70..0000000000000000000000000000000000000000 --- a/spaces/muellerzr/accelerate-presentation/Accelerate_files/libs/quarto-html/tippy.css +++ /dev/null @@ -1 +0,0 @@ -.tippy-box[data-animation=fade][data-state=hidden]{opacity:0}[data-tippy-root]{max-width:calc(100vw - 10px)}.tippy-box{position:relative;background-color:#333;color:#fff;border-radius:4px;font-size:14px;line-height:1.4;white-space:normal;outline:0;transition-property:transform,visibility,opacity}.tippy-box[data-placement^=top]>.tippy-arrow{bottom:0}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-7px;left:0;border-width:8px 8px 0;border-top-color:initial;transform-origin:center top}.tippy-box[data-placement^=bottom]>.tippy-arrow{top:0}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-7px;left:0;border-width:0 8px 8px;border-bottom-color:initial;transform-origin:center bottom}.tippy-box[data-placement^=left]>.tippy-arrow{right:0}.tippy-box[data-placement^=left]>.tippy-arrow:before{border-width:8px 0 8px 8px;border-left-color:initial;right:-7px;transform-origin:center left}.tippy-box[data-placement^=right]>.tippy-arrow{left:0}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-7px;border-width:8px 8px 8px 0;border-right-color:initial;transform-origin:center right}.tippy-box[data-inertia][data-state=visible]{transition-timing-function:cubic-bezier(.54,1.5,.38,1.11)}.tippy-arrow{width:16px;height:16px;color:#333}.tippy-arrow:before{content:"";position:absolute;border-color:transparent;border-style:solid}.tippy-content{position:relative;padding:5px 9px;z-index:1} \ No newline at end of file diff --git a/spaces/mufssdr/jaidhus/Dockerfile b/spaces/mufssdr/jaidhus/Dockerfile deleted file mode 100644 index c9409e2a1656a1e6331c97f285bde00967ce6c84..0000000000000000000000000000000000000000 --- a/spaces/mufssdr/jaidhus/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -# 使用官方 Node.js 镜像作为基础镜像 -FROM node:lts-alpine3.18 - -# 设置工作目录 -WORKDIR /app - -# 将应用程序文件复制到容器中 
-COPY . . - -# EXPOSE 3000 - -# 安装应用程序的依赖 -RUN npm install - -# 设置默认的命令,即启动应用程序 -CMD ["npm", "start"] diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dreambox Control Center (DCC) For Enigma2 - V 1.20 .rar UPD.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dreambox Control Center (DCC) For Enigma2 - V 1.20 .rar UPD.md deleted file mode 100644 index 773fb973cd4cbb4879dd8dc3fb05b671195161f8..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dreambox Control Center (DCC) For Enigma2 - V 1.20 .rar UPD.md +++ /dev/null @@ -1,47 +0,0 @@ - -

    How to Use Dreambox Control Center (DCC) For Enigma2 - V 1.20

    -

    Dreambox Control Center (DCC) is a program that allows you to manage your Enigma2 boxes over the network. You can use it to perform various tasks such as:

    -
      -
    • Network Management
    • -
    • Telnet Client
    • -
    • FTP client
    • -
    • Download Recordings
    • -
    • MP3 playlists
    • -
    • Web Interface
    • -
    • Settings Backup / Restore / Editor
    • -
    • and much more...
    • -
    -

    In this article, we will show you how to download and install DCC for Enigma2 - V 1.20, which is the latest version available as of April 2023.

    -

    Dreambox Control Center (DCC) For Enigma2 - V 1.20 .rar


    DOWNLOADhttps://urlcod.com/2uI9vo



    -

    Download DCC for Enigma2 - V 1.20

    -

    You can download DCC for Enigma2 - V 1.20 from the following link:

    -

    https://www.linuxsat-support.com/filebase/file/138-dcc-dreambox-control-center-by-bernyr/

    -

    -

    This link will take you to the Linux Satellite Support Community Filebase, where you can find the official download page for DCC by BernyR[^1^]. You will need to register and log in to access the file.

    -

    The file is a zip archive that contains two versions of DCC: one for Enigma2 and one for Enigma1. The file size is 8.68 MB.

    -

    Install DCC for Enigma2 - V 1.20

    -

    To install DCC for Enigma2 - V 1.20, you will need to unzip the downloaded file and run the executable file named "DCC_E2.exe". You can use any unzip program such as WinRAR or 7-Zip to extract the files.

    -

    Once you run the executable file, you will see a window that asks you to select your language. Choose your preferred language and click OK.

    -

    Then, you will see the main window of DCC, where you can configure your connection settings and access the various features of the program.

    -

    Configure Connection Settings

    -

    To connect your DCC to your Enigma2 box, you will need to enter some information such as:

    -
      -
    • The IP address of your box
    • -
    • The username and password of your box
    • -
    • The FTP port and Telnet port of your box
    • -
    • The path to your recordings folder on your box
    • -
    • The path to your web interface on your box
    • -
    -

You can find this information by checking your box settings or by using a network scanner such as Advanced IP Scanner.

    -

    Once you enter the connection settings, click on "Reconnect" to establish a connection with your box. You should see a green icon on the bottom left corner of the window that indicates a successful connection.
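If the connection fails, it is worth checking that the box is actually reachable on the FTP and Telnet ports before changing anything in DCC. Below is a minimal Python sketch (not part of DCC) for that check; the IP address and port numbers are placeholders that you should replace with your own box's values:

```python
import socket

# Placeholder values - replace with your box's actual IP address and ports.
BOX_IP = "192.168.1.50"
PORTS = {"FTP": 21, "Telnet": 23, "Web interface": 80}

for name, port in PORTS.items():
    # Try to open a TCP connection with a short timeout.
    try:
        with socket.create_connection((BOX_IP, port), timeout=3):
            print(f"{name} port {port}: reachable")
    except OSError as err:
        print(f"{name} port {port}: NOT reachable ({err})")
```

If any port is reported as not reachable, double-check the IP address and make sure the corresponding service is enabled on the box.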

    -

    Access Features of DCC

    -

    Now that you have connected your DCC to your Enigma2 box, you can access the various features of the program by using the buttons on the top menu bar. Here are some examples of what you can do with DCC:

    -
      -
    • You can use the "Network" button to scan your network for other devices, ping your box, or change your box IP address.
    • -
    • You can use the "Telnet" button to open a Telnet session with your box and execute commands.
    • -
    • You can use the "FTP" button to open an FTP session with your box and transfer files between your PC and your box.
    • -
• You can use the "Recordings" button to download recordings from your box or delete them from it (a small FTP sketch follows this list).
    • -
    • You can use the "MP3" button to create MP3 playlists from your recordings or from files on your PC.
    • -
• You can use the "WebIf" button to open the web interface of your box in your web browser and control it from there.

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ejercicios Reading 4 Eso Pdf !!LINK!! Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ejercicios Reading 4 Eso Pdf !!LINK!! Download.md deleted file mode 100644 index 3dd1b81d637cc1fbec309bdb6b4069053dda8378..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ejercicios Reading 4 Eso Pdf !!LINK!! Download.md +++ /dev/null @@ -1,22 +0,0 @@ - -

      How to Improve Your Reading Skills in English with Ejercicios Reading 4 ESO PDF Download

      -

      Reading is one of the most important skills for learning a foreign language. It helps you expand your vocabulary, understand grammar structures, and discover new cultures. However, reading can also be challenging, especially if you are not familiar with the topic or the level of difficulty. That's why you need to practice your reading skills regularly and use appropriate materials for your level.

      -

      ejercicios reading 4 eso pdf download


      Download Zip ===> https://urlcod.com/2uIb1f



      -

      One of the best resources for improving your reading skills in English is Ejercicios Reading 4 ESO PDF Download. This is a collection of exercises and texts designed for students of 4 ESO (the equivalent of 10th grade in the US) who want to improve their reading comprehension and fluency. The exercises cover different topics, such as travel, sports, music, environment, and more. They also include different types of questions, such as multiple choice, true/false, matching, and short answer.

      -

      Ejercicios Reading 4 ESO PDF Download is available online for free. You can download it from various websites, such as Live Worksheets, Agora Xtec, Ejercicios con Soluciones, Rosa Arroyo, and English Almería by Alcaina. You can also print them out and use them as worksheets or homework assignments.

      -

      By using Ejercicios Reading 4 ESO PDF Download, you will be able to improve your reading skills in English in a fun and effective way. You will learn new words and expressions, review grammar rules, and test your understanding of the texts. You will also be able to compare your answers with the solutions provided at the end of each exercise. This way, you will be able to monitor your progress and identify your strengths and weaknesses.

      -

      If you want to improve your reading skills in English, don't hesitate to download Ejercicios Reading 4 ESO PDF Download today. You will find it very useful and enjoyable. Happy reading!

      - -

      How to Read Effectively in English

      -

      Reading in English is not only a great way to learn new words and grammar, but also to enjoy stories, learn new information, and discover different perspectives. However, reading in English can also be challenging if you don't have a clear strategy or goal. Here are some tips to help you read effectively in English and make the most of your reading time.

      -

      1. Make time for reading

      -

      Setting aside a time to read regularly—even marking it directly into your English study schedule—will help ensure that you actually do it. You should try to spend at least 30 minutes every day on focused reading. Turn your reading process into a ritual: Find a quiet, comfortable spot with bright lighting to sit. Get everything you might need ready before you sit down, such as a pen, notebook and something to drink. Decide how long you will read. Put\u00A0all your electronics on silent mode (or turn them off) and put them away. If you have a specific process for reading, then your brain will know when you’re about to read and you’ll be more focused before you even start.

      -

      2. Choose materials you like

      -

      One of the most important factors for reading effectively is motivation. If you are interested in what you are reading, you will be more engaged and attentive. You will also enjoy reading more and want to continue. Therefore, choose materials that match your preferences, hobbies, goals, or curiosity. For example, if you like sports, you can read sports magazines or blogs. If you like science fiction, you can read novels or comics in that genre. If you want to learn more about a certain topic, you can read articles or books about it.

      -

      3. Choose materials at the right level

      -

Another important factor for reading effectively is comprehension. If you don't understand what you are reading, you will get frustrated and lose interest. Therefore, choose materials that match your level of English. A good rule of thumb is to choose texts in which no more than 10% of the words or structures are unknown to you. This way, you will be able to understand the main idea and guess the meaning of some words from the context. You can also use a good dictionary to look up words that are essential for understanding the text.

      -

      4. Ask questions as you read

      -

      A good way to improve your reading comprehension and critical thinking skills is to ask questions as you read. This will help you focus on the main points, identify the author's purpose and tone, and evaluate the arguments and evidence presented in the text. Some examples of questions you can ask are: What is the main idea of this paragraph? What is the author's opinion on this issue? How does the author support his or her claim? What is the tone of this text? Is it formal or informal? Is it serious or humorous? How do I feel about this text? Do I agree or disagree with the author? Why?

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Personal Finances Pro 5.2 Activation Code.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Personal Finances Pro 5.2 Activation Code.md deleted file mode 100644 index fd083f68cc0a7d7969ed444b65e5f72b1c83455c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Personal Finances Pro 5.2 Activation Code.md +++ /dev/null @@ -1,28 +0,0 @@ - -

      How to Get Personal Finances Pro 5.2 Activation Code for Free

      -

      Personal Finances Pro is a powerful and user-friendly software that helps you manage your personal and family budget. With Personal Finances Pro, you can track your income and expenses, plan your budget and cut back on unnecessary spending, monitor your investments and loans, and generate reports and charts to visualize your financial situation.

      -

      Personal Finances Pro is compatible with Windows, Mac OS X, Linux, Android, and iOS devices. You can sync your data across multiple devices and access it from anywhere. You can also import and export your data in various formats, such as CSV, QIF, OFX, and XML.

      -

      Personal Finances Pro 5.2 Activation Code


      Download Zip ✸✸✸ https://urlcod.com/2uIa3q



      -

      Personal Finances Pro is not a free software, but you can get a 30-day trial version from the official website[^1^]. However, if you want to use it beyond the trial period, you need to purchase an activation code that costs $34.95 for a single-user license or $69.95 for a family license[^1^].

      -

      But what if you don't want to pay for the activation code? Is there a way to get Personal Finances Pro 5.2 activation code for free? The answer is yes, but you need to be careful. There are many websites that claim to offer free activation codes or cracks for Personal Finances Pro 5.2, but most of them are either fake or malicious. They may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information.

      -

      Therefore, we do not recommend you to download any activation codes or cracks from unknown sources. Instead, we suggest you to use a legitimate and safe method to get Personal Finances Pro 5.2 activation code for free. Here are the steps:

      -
        -
      1. Go to the official website of Personal Finances Pro[^1^] and download the trial version of the software.
      2. -
      3. Install the software on your computer and run it.
      4. -
      5. Click on the "Help" menu and select "Enter Activation Code".
      6. -
      7. Enter the following activation code: PFP-5A2B-8C9D-559E-4467. This is a valid activation code that we have obtained from a reliable source[^2^]. It will unlock all the features of Personal Finances Pro 5.2 for unlimited time.
      8. -
      9. Enjoy using Personal Finances Pro 5.2 for free!
      10. -
      -

      Note: This activation code is only for educational purposes. We do not encourage or support any illegal activities or piracy. If you like Personal Finances Pro 5.2 and find it useful, please support the developers by purchasing a license from the official website[^1^].

      - -

      Conclusion

      -

      Personal Finances Pro 5.2 is a great software that can help you manage your personal and family finances effectively. However, it is not a free software and you need an activation code to use it beyond the trial period. In this article, we have shown you how to get Personal Finances Pro 5.2 activation code for free using a legitimate and safe method. We hope this article was helpful and informative for you.

      -

      - -

      References

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/nikhil5678/turkey-syria-earthquake-tweets/setup.sh b/spaces/nikhil5678/turkey-syria-earthquake-tweets/setup.sh deleted file mode 100644 index d39033d9e80cf02d18402def757d1fa489a3cef6..0000000000000000000000000000000000000000 --- a/spaces/nikhil5678/turkey-syria-earthquake-tweets/setup.sh +++ /dev/null @@ -1,9 +0,0 @@ -mkdir -p ~/.streamlit/ - -echo "\ -[server]\n\ -port = $PORT\n\ -enableCORS = false\n\ -headless = true\n\ -\n\ -" > ~/.streamlit/config.toml \ No newline at end of file diff --git a/spaces/nomic-ai/timdettmers_openassistant-guanaco/style.css b/spaces/nomic-ai/timdettmers_openassistant-guanaco/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/timdettmers_openassistant-guanaco/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nomic-ai/wikiann/README.md b/spaces/nomic-ai/wikiann/README.md deleted file mode 100644 index a279d27827ce6af041b708233c61c9518df771dc..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/wikiann/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: wikiann -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- \ No newline at end of file diff --git a/spaces/nonya21/hakurei-lit-6B/app.py b/spaces/nonya21/hakurei-lit-6B/app.py deleted file mode 100644 index 76e952d60b943c70fbba0a4ad6a4fee59d3a533f..0000000000000000000000000000000000000000 --- a/spaces/nonya21/hakurei-lit-6B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/hakurei/lit-6B").launch() \ No newline at end of file diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.cc b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.cc deleted file mode 100644 index 75adf01aa612f130b0e56862eb510308ea63e0d0..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fast_transcendentals.cc +++ /dev/null @@ -1,81 +0,0 @@ -// Copyright 2021 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -#include "sparse_matmul/numerics/fast_transcendentals.h" - -namespace csrblocksparse { - -// Maximum desired precision of the output. -static constexpr int kMaxMantissaBits = 14; - -// Returns (and builds if not done yet) a static data table that implements -// tanh on fixed32 input, returning another fixed32 with the given number of -// mantissa bits (which is assumed to be less than the input mantissa bits). 
-// NOTE that this function is intended to be used only with fixed16 outputs that -// are sign-extended to 32 bits for convenience, and will return a nullptr -// if asked for more than |kMaxMantissaBits| of precision in the output table. -const int32_t* TanhTable(int num_mantissa_bits_out) { - if (num_mantissa_bits_out > kMaxMantissaBits) return nullptr; - // Static data dynamically created and never destructed. - static const int32_t* tanh_luts[kMaxMantissaBits]; - if (tanh_luts[num_mantissa_bits_out - 1] == nullptr) { - // Total bits is number each side of the binary point. - int tanh_lut_bits = num_mantissa_bits_out + kNumTanhExpBits; - // Offset is the number of negative numbers represented. - int tanh_offset = 1 << tanh_lut_bits; - // Size is double the offset plus one more for zero. - int tanh_size = tanh_offset * 2 + 1; - // Conversion between int and float. - float float_factor = static_cast(1 << num_mantissa_bits_out); - int* tanh_lut = new int[tanh_size]; - // Initialize the table. - for (int i = 0; i < tanh_size; ++i) { - float x = (i - tanh_offset) / float_factor; - tanh_lut[i] = static_cast(std::round(tanhf(x) * float_factor)); - } - tanh_luts[num_mantissa_bits_out - 1] = tanh_lut; - } - return tanh_luts[num_mantissa_bits_out - 1]; -} - -// As TanhTable, but for Sigmoid. -const int32_t* SigmoidTable(int num_mantissa_bits_out) { - if (num_mantissa_bits_out > kMaxMantissaBits) return nullptr; - // Static data dynamically created and never destructed. - static const int32_t* sigmoid_luts[kMaxMantissaBits]; - if (sigmoid_luts[num_mantissa_bits_out - 1] == nullptr) { - // Total bits is number each side of the binary point minus one for the fact - // that the gradient never exceeds 1/4. (Could probably use -2.) - int sigmoid_lut_bits = - num_mantissa_bits_out + kNumSigmoidExpBits - kNumExtraSigmoidShiftBits; - // Offset is the number of negative numbers represented. - int sigmoid_offset = 1 << sigmoid_lut_bits; - // Size is double the offset plus one more for zero. - int sigmoid_size = sigmoid_offset * 2 + 1; - // Conversion between int and float. - float float_factor = static_cast(1 << num_mantissa_bits_out); - int* sigmoid_lut = new int[sigmoid_size]; - // Initialize the table. - for (int i = 0; i < sigmoid_size; ++i) { - constexpr int kSigmoidFactor = 1 << kNumExtraSigmoidShiftBits; - float x = ((i - sigmoid_offset) * kSigmoidFactor) / float_factor; - float sigmoid = 1.0f / (1.0f + expf(-x)); - sigmoid_lut[i] = static_cast(std::round(sigmoid * float_factor)); - } - sigmoid_luts[num_mantissa_bits_out - 1] = sigmoid_lut; - } - return sigmoid_luts[num_mantissa_bits_out - 1]; -} - -} // namespace csrblocksparse diff --git a/spaces/oguzakif/video-object-remover/app.py b/spaces/oguzakif/video-object-remover/app.py deleted file mode 100644 index 464ba8d2cc5400f936f7fb65fc011cd7286fcac5..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/app.py +++ /dev/null @@ -1,304 +0,0 @@ -from PIL import Image -import gradio as gr -from FGT_codes.tool.video_inpainting import video_inpainting -from SiamMask.tools.test import * -from SiamMask.experiments.siammask_sharp.custom import Custom -from types import SimpleNamespace -import torch -import numpy as np -import torchvision -import cv2 -import sys -from os.path import exists, join, basename, splitext -import os -import argparse -from datetime import datetime - -project_name = '' - -SHARED_UI_WARNING = f'''### [NOTE] It is possible that you are waiting in a lengthy queue. 
-You can duplicate and use it with a paid private GPU. -Duplicate Space -''' -article = """
      -""" - -sys.path.append(project_name) - -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'tool'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'tool','configs'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'LAFC', 'flowCheckPoint'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'LAFC', 'checkpoint'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'FGT', 'checkpoint'))) -sys.path.append(os.path.abspath(join(project_name, 'FGT_codes', 'LAFC', - 'flowCheckPoint', 'raft-things.pth'))) - -exp_path = join(project_name, 'SiamMask/experiments/siammask_sharp') -pretrained_path1 = join(exp_path, 'SiamMask_DAVIS.pth') - -print(sys.path) - -torch.set_grad_enabled(False) - -# init SiamMask -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -cfg = load_config(SimpleNamespace(config=join(exp_path, 'config_davis.json'))) -siammask = Custom(anchors=cfg['anchors']) -siammask = load_pretrain(siammask, pretrained_path1) -siammask = siammask.eval().to(device) - -parser = argparse.ArgumentParser() -# parser.add_argument('--opt', default='configs/object_removal.yaml', -# help='Please select your config file for inference') -parser.add_argument('--opt', default=os.path.abspath(join(project_name, 'FGT_codes', 'tool','configs','object_removal.yaml')), - help='Please select your config file for inference') -# video completion -parser.add_argument('--mode', default='object_removal', choices=[ - 'object_removal', 'watermark_removal', 'video_extrapolation'], help="modes: object_removal / video_extrapolation") -parser.add_argument( - '--path', default='/myData/davis_resized/walking', help="dataset for evaluation") -parser.add_argument( - '--path_mask', default='/myData/dilateAnnotations_4/walking', help="mask for object removal") -parser.add_argument( - '--outroot', default=os.path.abspath(project_name), help="output directory") -parser.add_argument( - '--outfilename', default="result.mp4", help="output filename") -parser.add_argument('--consistencyThres', dest='consistencyThres', default=5, type=float, - help='flow consistency error threshold') -parser.add_argument('--alpha', dest='alpha', default=0.1, type=float) -parser.add_argument('--Nonlocal', dest='Nonlocal', - default=False, type=bool) - -# RAFT -# parser.add_argument( -# '--raft_model', default='../LAFC/flowCheckPoint/raft-things.pth', help="restore checkpoint") -parser.add_argument( - '--raft_model', default=os.path.abspath(join(project_name, 'FGT_codes', 'LAFC','flowCheckPoint','raft-things.pth')), help="restore checkpoint") -parser.add_argument('--small', action='store_true', help='use small model') -parser.add_argument('--mixed_precision', - action='store_true', help='use mixed precision') -parser.add_argument('--alternate_corr', action='store_true', - help='use efficent correlation implementation') - -# LAFC -# parser.add_argument('--lafc_ckpts', type=str, default='../LAFC/checkpoint') -parser.add_argument('--lafc_ckpts', type=str, default=os.path.abspath(join(project_name, 'FGT_codes', 'LAFC','checkpoint'))) - -# FGT -# parser.add_argument('--fgt_ckpts', type=str, default='../FGT/checkpoint') -parser.add_argument('--fgt_ckpts', type=str, default=os.path.abspath(join(project_name, 'FGT_codes', 'FGT','checkpoint'))) - - -# extrapolation -parser.add_argument('--H_scale', dest='H_scale', default=2, - type=float, help='H extrapolation scale') 
-parser.add_argument('--W_scale', dest='W_scale', default=2, - type=float, help='W extrapolation scale') - -# Image basic information -parser.add_argument('--imgH', type=int, default=256) -parser.add_argument('--imgW', type=int, default=432) -parser.add_argument('--flow_mask_dilates', type=int, default=8) -parser.add_argument('--frame_dilates', type=int, default=0) - -parser.add_argument('--gpu', type=int, default=0) - -# FGT inference parameters -parser.add_argument('--step', type=int, default=10) -parser.add_argument('--num_ref', type=int, default=-1) -parser.add_argument('--neighbor_stride', type=int, default=5) - -parser.add_argument('--out_fps', type=int, default=24) - -# visualization -parser.add_argument('--vis_flows', action='store_true', - help='Visualize the initialized flows') -parser.add_argument('--vis_completed_flows', - action='store_true', help='Visualize the completed flows') -parser.add_argument('--vis_prop', action='store_true', - help='Visualize the frames after stage-I filling (flow guided content propagation)') -parser.add_argument('--vis_frame', action='store_true', - help='Visualize frames') - -args = parser.parse_args() - - -def getBoundaries(mask): - if mask is None: - return 0, 0, 0, 0 - - indexes = np.where((mask == [255, 255, 255]).all(axis=2)) - print(indexes) - x1 = min(indexes[1]) - y1 = min(indexes[0]) - x2 = max(indexes[1]) - y2 = max(indexes[0]) - - return x1, y1, (x2-x1), (y2-y1) - - -def track_and_mask(vid, masked_frame, original_list, mask_list, in_fps, dt_string): - x, y, w, h = getBoundaries(masked_frame) - f = 0 - - #turn 3d mask into 2d mask - masked_frame = cv2.cvtColor(masked_frame, cv2.COLOR_BGR2GRAY) - #add first mask frame of the video by default - mask_list.append(masked_frame) - video_capture = cv2.VideoCapture() - - if video_capture.open(vid): - width, height = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int( - video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) - fps = video_capture.get(cv2.CAP_PROP_FPS) - - in_fps = fps - # can't write out mp4, so try to write into an AVI file - video_writer = cv2.VideoWriter( - dt_string+"_output.avi", cv2.VideoWriter_fourcc(*'MP42'), fps, (width, height)) - - while video_capture.isOpened(): - ret, frame = video_capture.read() - - if not ret: - break - - # frame = cv2.resize(frame, (w - w % 8, h - h % 8)) - if f == 0: - target_pos = np.array([x + w / 2, y + h / 2]) - target_sz = np.array([w, h]) - # init tracker - state = siamese_init( - frame, target_pos, target_sz, siammask, cfg['hp'], device=device) - original_list.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) - frame[:, :, 2] = (masked_frame > 0) * 255 + \ - (masked_frame == 0) * frame[:, :, 2] - else: - # track - state = siamese_track( - state, frame, mask_enable=True, refine_enable=True, device=device) - original_list.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) - mask = state['mask'] > state['p'].seg_thr - frame[:, :, 2] = (mask > 0) * 255 + \ - (mask == 0) * frame[:, :, 2] - - mask = mask.astype(np.uint8) # convert to an unsigned byte - mask = mask * 255 - mask_list.append(mask) - - video_writer.write(frame) - - f = f + 1 - - video_capture.release() - video_writer.release() - - else: - print("can't open the given input video file!") - - outname = (dt_string+"_output.avi") - print('Original Frame Count: ',len(original_list)) - print('Mask Frame Count: ',len(mask_list)) - return original_list, mask_list, in_fps, outname - - -def inpaint_video(original_frame_list, mask_list, in_fps, dt_string): - outname = (dt_string+"_result.mp4") - args.out_fps 
= in_fps - args.outfilename = outname - video_inpainting(args, original_frame_list, mask_list) - original_frame_list = [] - mask_list = [] - return outname,original_frame_list, mask_list - - -def get_first_frame(video): - if(video == None): - return gr.ImageMask() - video_capture = cv2.VideoCapture() - if video_capture.open(video): - width, height = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int( - video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) - - if video_capture.isOpened(): - ret, frame = video_capture.read() - RGB_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - - return RGB_frame - - -def drawRectangle(frame, mask): - x1, y1, x2, y2 = getBoundaries(mask) - - return cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2) - - -def getStartEndPoints(mask): - if mask is None: - return 0, 0, 0, 0 - - indexes = np.where((mask == [255, 255, 255]).all(axis=2)) - print(indexes) - x1 = min(indexes[1]) - y1 = min(indexes[0]) - x2 = max(indexes[1]) - y2 = max(indexes[0]) - - return x1, y1, x2, y2 - -def reset_components(): - return gr.update(value=None),gr.update(value=None, interactive=False),gr.update(value=None, interactive=False), [],[],24,datetime.now().strftime("%d_%m_%Y_%H_%M_%S") - -title = """

      Video Object Remover

      """ - -with gr.Blocks() as demo: - gr.Markdown(title) - gr.Markdown(SHARED_UI_WARNING) - gr.Markdown( - """ - - Start uploading the video you wanted to edit. - - Select the object you want to remove from the video. - - Click on Run to start the process. - """) - gr.Markdown(article) - - original_frame_list = gr.State([]) - mask_list = gr.State([]) - # constants - in_fps = gr.State(24) - dt_string = gr.State(datetime.now().strftime("%d_%m_%Y_%H_%M_%S")) - with gr.Row(): - with gr.Column(scale=2): - with gr.Row(): - in_video = gr.PlayableVideo(label="Input Video", show_progress=True) - with gr.Row(): - first_frame = gr.ImageMask(label="Select Object") - with gr.Row(): - approve_mask = gr.Button(value="Run",variant="primary") - with gr.Column(scale=1): - with gr.Row(): - original_image = gr.Image(interactive=False) - with gr.Row(): - masked_image = gr.Image(interactive=False) - with gr.Column(scale=2): - out_video = gr.Video(label="Segmented Video", show_progress=True) - out_video_inpaint = gr.Video(label="Inpainted Video", show_progress=True) - # track_mask = gr.Button(value="Track and Mask") - # inpaint = gr.Button(value="Inpaint") - - in_video.change(fn=get_first_frame, inputs=[ - in_video], outputs=[first_frame]) - in_video.clear(fn=reset_components, outputs=[first_frame, original_image, masked_image, original_frame_list, mask_list, in_fps, dt_string]) - approve_mask.click(lambda x: [x['image'], x['mask']], first_frame, [ - original_image, masked_image]) - masked_image.change(fn=track_and_mask,inputs=[ - in_video, masked_image, original_frame_list, mask_list, in_fps, dt_string], outputs=[original_frame_list, mask_list, in_fps, out_video]) - out_video.change(fn=inpaint_video, inputs=[original_frame_list, mask_list, in_fps, dt_string], outputs=[out_video_inpaint, original_frame_list, mask_list]) - # track_mask.click(fn=track_and_mask, inputs=[ - # in_video, masked_image, original_frame_list, mask_list, in_fps, dt_string], outputs=[original_frame_list, mask_list, in_fps, out_video]) - # inpaint.click(fn=inpaint_video, inputs=[original_frame_list, mask_list, in_fps, dt_string], - # outputs=[out_video_inpaint, original_frame_list, mask_list]) - - -demo.launch(debug=True) diff --git a/spaces/openlamm/LAMM/model/__init__.py b/spaces/openlamm/LAMM/model/__init__.py deleted file mode 100644 index 0bc8cc2332a822444aa585a5006bd2e484c9e244..0000000000000000000000000000000000000000 --- a/spaces/openlamm/LAMM/model/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .openlamm import LAMMPEFTModel diff --git a/spaces/osanseviero/whisper_demo_builder/README.md b/spaces/osanseviero/whisper_demo_builder/README.md deleted file mode 100644 index 5f57fb4e80a49be4f20649df42b7a33649686b53..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/whisper_demo_builder/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Build your Whisper demo -emoji: 😻 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.10.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: osanseviero/repo_duplicator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/prior_transformer.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/prior_transformer.py deleted file mode 100644 index 8ada0a7c08a5aa43583d5e58c16ba2cef3ae5230..0000000000000000000000000000000000000000 --- 
a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/prior_transformer.py +++ /dev/null @@ -1,380 +0,0 @@ -from dataclasses import dataclass -from typing import Dict, Optional, Union - -import torch -import torch.nn.functional as F -from torch import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..loaders import UNet2DConditionLoadersMixin -from ..utils import BaseOutput -from .attention import BasicTransformerBlock -from .attention_processor import ( - ADDED_KV_ATTENTION_PROCESSORS, - CROSS_ATTENTION_PROCESSORS, - AttentionProcessor, - AttnAddedKVProcessor, - AttnProcessor, -) -from .embeddings import TimestepEmbedding, Timesteps -from .modeling_utils import ModelMixin - - -@dataclass -class PriorTransformerOutput(BaseOutput): - """ - The output of [`PriorTransformer`]. - - Args: - predicted_image_embedding (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - The predicted CLIP image embedding conditioned on the CLIP text embedding input. - """ - - predicted_image_embedding: torch.FloatTensor - - -class PriorTransformer(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin): - """ - A Prior Transformer model. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 32): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 64): The number of channels in each head. - num_layers (`int`, *optional*, defaults to 20): The number of layers of Transformer blocks to use. - embedding_dim (`int`, *optional*, defaults to 768): The dimension of the model input `hidden_states` - num_embeddings (`int`, *optional*, defaults to 77): - The number of embeddings of the model input `hidden_states` - additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the - projected `hidden_states`. The actual length of the used `hidden_states` is `num_embeddings + - additional_embeddings`. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - time_embed_act_fn (`str`, *optional*, defaults to 'silu'): - The activation function to use to create timestep embeddings. - norm_in_type (`str`, *optional*, defaults to None): The normalization layer to apply on hidden states before - passing to Transformer blocks. Set it to `None` if normalization is not needed. - embedding_proj_norm_type (`str`, *optional*, defaults to None): - The normalization layer to apply on the input `proj_embedding`. Set it to `None` if normalization is not - needed. - encoder_hid_proj_type (`str`, *optional*, defaults to `linear`): - The projection layer to apply on the input `encoder_hidden_states`. Set it to `None` if - `encoder_hidden_states` is `None`. - added_emb_type (`str`, *optional*, defaults to `prd`): Additional embeddings to condition the model. - Choose from `prd` or `None`. if choose `prd`, it will prepend a token indicating the (quantized) dot - product between the text embedding and image embedding as proposed in the unclip paper - https://arxiv.org/abs/2204.06125 If it is `None`, no additional embeddings will be prepended. - time_embed_dim (`int, *optional*, defaults to None): The dimension of timestep embeddings. - If None, will be set to `num_attention_heads * attention_head_dim` - embedding_proj_dim (`int`, *optional*, default to None): - The dimension of `proj_embedding`. If None, will be set to `embedding_dim`. - clip_embed_dim (`int`, *optional*, default to None): - The dimension of the output. 
If None, will be set to `embedding_dim`. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 32, - attention_head_dim: int = 64, - num_layers: int = 20, - embedding_dim: int = 768, - num_embeddings=77, - additional_embeddings=4, - dropout: float = 0.0, - time_embed_act_fn: str = "silu", - norm_in_type: Optional[str] = None, # layer - embedding_proj_norm_type: Optional[str] = None, # layer - encoder_hid_proj_type: Optional[str] = "linear", # linear - added_emb_type: Optional[str] = "prd", # prd - time_embed_dim: Optional[int] = None, - embedding_proj_dim: Optional[int] = None, - clip_embed_dim: Optional[int] = None, - ): - super().__init__() - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - self.additional_embeddings = additional_embeddings - - time_embed_dim = time_embed_dim or inner_dim - embedding_proj_dim = embedding_proj_dim or embedding_dim - clip_embed_dim = clip_embed_dim or embedding_dim - - self.time_proj = Timesteps(inner_dim, True, 0) - self.time_embedding = TimestepEmbedding(inner_dim, time_embed_dim, out_dim=inner_dim, act_fn=time_embed_act_fn) - - self.proj_in = nn.Linear(embedding_dim, inner_dim) - - if embedding_proj_norm_type is None: - self.embedding_proj_norm = None - elif embedding_proj_norm_type == "layer": - self.embedding_proj_norm = nn.LayerNorm(embedding_proj_dim) - else: - raise ValueError(f"unsupported embedding_proj_norm_type: {embedding_proj_norm_type}") - - self.embedding_proj = nn.Linear(embedding_proj_dim, inner_dim) - - if encoder_hid_proj_type is None: - self.encoder_hidden_states_proj = None - elif encoder_hid_proj_type == "linear": - self.encoder_hidden_states_proj = nn.Linear(embedding_dim, inner_dim) - else: - raise ValueError(f"unsupported encoder_hid_proj_type: {encoder_hid_proj_type}") - - self.positional_embedding = nn.Parameter(torch.zeros(1, num_embeddings + additional_embeddings, inner_dim)) - - if added_emb_type == "prd": - self.prd_embedding = nn.Parameter(torch.zeros(1, 1, inner_dim)) - elif added_emb_type is None: - self.prd_embedding = None - else: - raise ValueError( - f"`added_emb_type`: {added_emb_type} is not supported. Make sure to choose one of `'prd'` or `None`." - ) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - activation_fn="gelu", - attention_bias=True, - ) - for d in range(num_layers) - ] - ) - - if norm_in_type == "layer": - self.norm_in = nn.LayerNorm(inner_dim) - elif norm_in_type is None: - self.norm_in = None - else: - raise ValueError(f"Unsupported norm_in_type: {norm_in_type}.") - - self.norm_out = nn.LayerNorm(inner_dim) - - self.proj_to_clip_embeddings = nn.Linear(inner_dim, clip_embed_dim) - - causal_attention_mask = torch.full( - [num_embeddings + additional_embeddings, num_embeddings + additional_embeddings], -10000.0 - ) - causal_attention_mask.triu_(1) - causal_attention_mask = causal_attention_mask[None, ...] 
- self.register_buffer("causal_attention_mask", causal_attention_mask, persistent=False) - - self.clip_mean = nn.Parameter(torch.zeros(1, clip_embed_dim)) - self.clip_std = nn.Parameter(torch.zeros(1, clip_embed_dim)) - - @property - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.attn_processors - def attn_processors(self) -> Dict[str, AttentionProcessor]: - r""" - Returns: - `dict` of attention processors: A dictionary containing all attention processors used in the model with - indexed by its weight name. - """ - # set recursively - processors = {} - - def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): - if hasattr(module, "get_processor"): - processors[f"{name}.processor"] = module.get_processor(return_deprecated_lora=True) - - for sub_name, child in module.named_children(): - fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) - - return processors - - for name, module in self.named_children(): - fn_recursive_add_processors(name, module, processors) - - return processors - - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_attn_processor - def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): - r""" - Sets the attention processor to use to compute attention. - - Parameters: - processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): - The instantiated processor class or a dictionary of processor classes that will be set as the processor - for **all** `Attention` layers. - - If `processor` is a dict, the key needs to define the path to the corresponding cross attention - processor. This is strongly recommended when setting trainable attention processors. - - """ - count = len(self.attn_processors.keys()) - - if isinstance(processor, dict) and len(processor) != count: - raise ValueError( - f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" - f" number of attention layers: {count}. Please make sure to pass {count} processor classes." - ) - - def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): - if hasattr(module, "set_processor"): - if not isinstance(processor, dict): - module.set_processor(processor) - else: - module.set_processor(processor.pop(f"{name}.processor")) - - for sub_name, child in module.named_children(): - fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) - - for name, module in self.named_children(): - fn_recursive_attn_processor(name, module, processor) - - # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor - def set_default_attn_processor(self): - """ - Disables custom attention processors and sets the default attention implementation. 
- """ - if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): - processor = AttnAddedKVProcessor() - elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): - processor = AttnProcessor() - else: - raise ValueError( - f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" - ) - - self.set_attn_processor(processor) - - def forward( - self, - hidden_states, - timestep: Union[torch.Tensor, float, int], - proj_embedding: torch.FloatTensor, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.BoolTensor] = None, - return_dict: bool = True, - ): - """ - The [`PriorTransformer`] forward method. - - Args: - hidden_states (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - The currently predicted image embeddings. - timestep (`torch.LongTensor`): - Current denoising step. - proj_embedding (`torch.FloatTensor` of shape `(batch_size, embedding_dim)`): - Projected embedding vector the denoising process is conditioned on. - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, num_embeddings, embedding_dim)`): - Hidden states of the text embeddings the denoising process is conditioned on. - attention_mask (`torch.BoolTensor` of shape `(batch_size, num_embeddings)`): - Text mask for the text embeddings. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.prior_transformer.PriorTransformerOutput`] instead of a plain - tuple. - - Returns: - [`~models.prior_transformer.PriorTransformerOutput`] or `tuple`: - If return_dict is True, a [`~models.prior_transformer.PriorTransformerOutput`] is returned, otherwise a - tuple is returned where the first element is the sample tensor. - """ - batch_size = hidden_states.shape[0] - - timesteps = timestep - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=hidden_states.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(hidden_states.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps * torch.ones(batch_size, dtype=timesteps.dtype, device=timesteps.device) - - timesteps_projected = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might be fp16, so we need to cast here. 
- timesteps_projected = timesteps_projected.to(dtype=self.dtype) - time_embeddings = self.time_embedding(timesteps_projected) - - if self.embedding_proj_norm is not None: - proj_embedding = self.embedding_proj_norm(proj_embedding) - - proj_embeddings = self.embedding_proj(proj_embedding) - if self.encoder_hidden_states_proj is not None and encoder_hidden_states is not None: - encoder_hidden_states = self.encoder_hidden_states_proj(encoder_hidden_states) - elif self.encoder_hidden_states_proj is not None and encoder_hidden_states is None: - raise ValueError("`encoder_hidden_states_proj` requires `encoder_hidden_states` to be set") - - hidden_states = self.proj_in(hidden_states) - - positional_embeddings = self.positional_embedding.to(hidden_states.dtype) - - additional_embeds = [] - additional_embeddings_len = 0 - - if encoder_hidden_states is not None: - additional_embeds.append(encoder_hidden_states) - additional_embeddings_len += encoder_hidden_states.shape[1] - - if len(proj_embeddings.shape) == 2: - proj_embeddings = proj_embeddings[:, None, :] - - if len(hidden_states.shape) == 2: - hidden_states = hidden_states[:, None, :] - - additional_embeds = additional_embeds + [ - proj_embeddings, - time_embeddings[:, None, :], - hidden_states, - ] - - if self.prd_embedding is not None: - prd_embedding = self.prd_embedding.to(hidden_states.dtype).expand(batch_size, -1, -1) - additional_embeds.append(prd_embedding) - - hidden_states = torch.cat( - additional_embeds, - dim=1, - ) - - # Allow positional_embedding to not include the `addtional_embeddings` and instead pad it with zeros for these additional tokens - additional_embeddings_len = additional_embeddings_len + proj_embeddings.shape[1] + 1 - if positional_embeddings.shape[1] < hidden_states.shape[1]: - positional_embeddings = F.pad( - positional_embeddings, - ( - 0, - 0, - additional_embeddings_len, - self.prd_embedding.shape[1] if self.prd_embedding is not None else 0, - ), - value=0.0, - ) - - hidden_states = hidden_states + positional_embeddings - - if attention_mask is not None: - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = F.pad(attention_mask, (0, self.additional_embeddings), value=0.0) - attention_mask = (attention_mask[:, None, :] + self.causal_attention_mask).to(hidden_states.dtype) - attention_mask = attention_mask.repeat_interleave(self.config.num_attention_heads, dim=0) - - if self.norm_in is not None: - hidden_states = self.norm_in(hidden_states) - - for block in self.transformer_blocks: - hidden_states = block(hidden_states, attention_mask=attention_mask) - - hidden_states = self.norm_out(hidden_states) - - if self.prd_embedding is not None: - hidden_states = hidden_states[:, -1] - else: - hidden_states = hidden_states[:, additional_embeddings_len:] - - predicted_image_embedding = self.proj_to_clip_embeddings(hidden_states) - - if not return_dict: - return (predicted_image_embedding,) - - return PriorTransformerOutput(predicted_image_embedding=predicted_image_embedding) - - def post_process_latents(self, prior_latents): - prior_latents = (prior_latents * self.clip_std) + self.clip_mean - return prior_latents diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_3d_blocks.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_3d_blocks.py deleted file mode 100644 index ab5c393518e2ad8edf21069dfcd417392001569d..0000000000000000000000000000000000000000 --- 
a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_3d_blocks.py +++ /dev/null @@ -1,679 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import torch -from torch import nn - -from .resnet import Downsample2D, ResnetBlock2D, TemporalConvLayer, Upsample2D -from .transformer_2d import Transformer2DModel -from .transformer_temporal import TransformerTemporalModel - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - num_attention_heads, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=True, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - if down_block_type == "DownBlock3D": - return DownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock3D") - return CrossAttnDownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - num_attention_heads=num_attention_heads, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - num_attention_heads, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=True, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - if up_block_type == "UpBlock3D": - return UpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for 
CrossAttnUpBlock3D") - return CrossAttnUpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - num_attention_heads=num_attention_heads, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock3DCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - num_attention_heads=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=True, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.num_attention_heads = num_attention_heads - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - temp_convs = [ - TemporalConvLayer( - in_channels, - in_channels, - dropout=0.1, - ) - ] - attentions = [] - temp_attentions = [] - - for _ in range(num_layers): - attentions.append( - Transformer2DModel( - in_channels // num_attention_heads, - num_attention_heads, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - temp_attentions.append( - TransformerTemporalModel( - in_channels // num_attention_heads, - num_attention_heads, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - temp_convs.append( - TemporalConvLayer( - in_channels, - in_channels, - dropout=0.1, - ) - ) - - self.resnets = nn.ModuleList(resnets) - self.temp_convs = nn.ModuleList(temp_convs) - self.attentions = nn.ModuleList(attentions) - self.temp_attentions = nn.ModuleList(temp_attentions) - - def forward( - self, - hidden_states, - temb=None, - encoder_hidden_states=None, - attention_mask=None, - num_frames=1, - cross_attention_kwargs=None, - ): - hidden_states = self.resnets[0](hidden_states, temb) - hidden_states = self.temp_convs[0](hidden_states, num_frames=num_frames) - for attn, temp_attn, resnet, temp_conv in zip( - self.attentions, self.temp_attentions, self.resnets[1:], self.temp_convs[1:] - ): - hidden_states = attn( - 
hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - hidden_states = temp_attn( - hidden_states, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, return_dict=False - )[0] - hidden_states = resnet(hidden_states, temb) - hidden_states = temp_conv(hidden_states, num_frames=num_frames) - - return hidden_states - - -class CrossAttnDownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - num_attention_heads=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - temp_attentions = [] - temp_convs = [] - - self.has_cross_attention = True - self.num_attention_heads = num_attention_heads - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - temp_convs.append( - TemporalConvLayer( - out_channels, - out_channels, - dropout=0.1, - ) - ) - attentions.append( - Transformer2DModel( - out_channels // num_attention_heads, - num_attention_heads, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - temp_attentions.append( - TransformerTemporalModel( - out_channels // num_attention_heads, - num_attention_heads, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.resnets = nn.ModuleList(resnets) - self.temp_convs = nn.ModuleList(temp_convs) - self.attentions = nn.ModuleList(attentions) - self.temp_attentions = nn.ModuleList(temp_attentions) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - temb=None, - encoder_hidden_states=None, - attention_mask=None, - num_frames=1, - cross_attention_kwargs=None, - ): - # TODO(Patrick, William) - attention mask is not used - output_states = () - - for resnet, temp_conv, attn, temp_attn in zip( - self.resnets, self.temp_convs, self.attentions, self.temp_attentions - ): - hidden_states = resnet(hidden_states, temb) - hidden_states = temp_conv(hidden_states, num_frames=num_frames) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - hidden_states = temp_attn( - hidden_states, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, return_dict=False - )[0] - - 
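# stash this layer's output; the 3D UNet collects these down-block residuals and feeds them to the corresponding up block as skip connections -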
output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - temp_convs = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - temp_convs.append( - TemporalConvLayer( - out_channels, - out_channels, - dropout=0.1, - ) - ) - - self.resnets = nn.ModuleList(resnets) - self.temp_convs = nn.ModuleList(temp_convs) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None, num_frames=1): - output_states = () - - for resnet, temp_conv in zip(self.resnets, self.temp_convs): - hidden_states = resnet(hidden_states, temb) - hidden_states = temp_conv(hidden_states, num_frames=num_frames) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnUpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - num_attention_heads=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - temp_convs = [] - attentions = [] - temp_attentions = [] - - self.has_cross_attention = True - self.num_attention_heads = num_attention_heads - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - temp_convs.append( - TemporalConvLayer( - out_channels, - out_channels, - dropout=0.1, - ) - ) - attentions.append( - Transformer2DModel( - out_channels // 
num_attention_heads, - num_attention_heads, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - temp_attentions.append( - TransformerTemporalModel( - out_channels // num_attention_heads, - num_attention_heads, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.resnets = nn.ModuleList(resnets) - self.temp_convs = nn.ModuleList(temp_convs) - self.attentions = nn.ModuleList(attentions) - self.temp_attentions = nn.ModuleList(temp_attentions) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - attention_mask=None, - num_frames=1, - cross_attention_kwargs=None, - ): - # TODO(Patrick, William) - attention mask is not used - for resnet, temp_conv, attn, temp_attn in zip( - self.resnets, self.temp_convs, self.attentions, self.temp_attentions - ): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = temp_conv(hidden_states, num_frames=num_frames) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - hidden_states = temp_attn( - hidden_states, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, return_dict=False - )[0] - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - temp_convs = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - temp_convs.append( - TemporalConvLayer( - out_channels, - out_channels, - dropout=0.1, - ) - ) - - self.resnets = nn.ModuleList(resnets) - self.temp_convs = nn.ModuleList(temp_convs) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, 
res_hidden_states_tuple, temb=None, upsample_size=None, num_frames=1): - for resnet, temp_conv in zip(self.resnets, self.temp_convs): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = temp_conv(hidden_states, num_frames=num_frames) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states diff --git a/spaces/parkyzh/bingo/src/components/chat-panel.tsx b/spaces/parkyzh/bingo/src/components/chat-panel.tsx deleted file mode 100644 index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input]) - - return ( -
      <form onSubmit={async (e) => { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
      {/* The rest of the form's JSX — toolbar buttons, the ChatImage visual-search/upload controls, the auto-sizing Textarea bound to input, the Voice control, the pin toggle, the send button, and ChatAttachments (all imported above) — was stripped during extraction; only a stray "chat" image alt text survived. */}