diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DaVinci Resolve Download A Reddit Users Solution to the Blackmagic Design Website.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DaVinci Resolve Download A Reddit Users Solution to the Blackmagic Design Website.md
deleted file mode 100644
index 88858c0ee2f5b0b7d53004b27e16abff1b3b5e52..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DaVinci Resolve Download A Reddit Users Solution to the Blackmagic Design Website.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
How to Download DaVinci Resolve for Free
-
DaVinci Resolve is a powerful and versatile video editing software that offers features such as color correction, visual effects, audio post-production, and more. It is used by professionals and hobbyists alike for various projects, from films and TV shows to YouTube videos and podcasts.
-
If you want to try DaVinci Resolve for yourself, you can download it for free from the official website of Blackmagic Design, the company that develops and distributes the software. However, finding the download link can be tricky, because the site is not easy to navigate and the link is not prominently placed. Fortunately, there is a simpler way to reach the download page, thanks to a Reddit user who shared a direct link to it.
Go to the Reddit post by u/whyareyouemailingme, who shared a link that lists only the DaVinci Resolve download links.
-
Click on the link that says https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion. This will take you to the Blackmagic Design support page, where you can see all the available versions of DaVinci Resolve and of Fusion, a companion application for visual effects and motion graphics.
-
Choose the version of DaVinci Resolve that you want to download. You can either download the latest version (18.5 at the time of writing this article) or an older version if you have compatibility issues with your system or project. You can also choose between the Studio version, which requires a paid license and offers more features and performance, or the Free version, which has some limitations but is still very capable.
-
Click on the Download button next to your chosen version. This will prompt you to fill out a registration form with your name, email address, country, and some other information. You can also opt in to or out of receiving newsletters and updates from Blackmagic Design.
-
After filling out the form, click on Register and Download. This will start the download process of the installer file for DaVinci Resolve. Depending on your internet speed and the size of the file, this may take some time.
-
Once the download is complete, locate the installer file on your computer and run it. Follow the instructions on the screen to install DaVinci Resolve on your system. You may need to restart your computer after the installation is done.
-
Launch DaVinci Resolve and enjoy editing your videos!
-
-
Tips and Tricks for Using DaVinci Resolve
-
-
If you are new to DaVinci Resolve, you can check out the tutorials and guides on the official Blackmagic Design website. You can also find many helpful videos on YouTube and other platforms from various creators who share their tips and tricks for using the software.
-
If you encounter any issues or bugs with DaVinci Resolve, you can report them on the official Blackmagic Design forum. You can also ask questions and get help from other users who may have faced similar problems or found solutions for them.
-
If you want to stay updated on the latest news and features of DaVinci Resolve, you can follow their official social media accounts on Facebook, Twitter, Instagram, and YouTube. You can also join their subreddit r/davinciresolve, where you can find useful resources, discussions, feedback, and inspiration from other users.
-
-
Conclusion
-
DaVinci Resolve is a powerful, free video editing tool, and thanks to the direct link shared by a Reddit user, finding the download page on the Blackmagic Design website takes only a moment. Register, download the installer, and start editing your videos.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Billu Barber 2009 Blu Ray 720p X264 Darkboy24 !FREE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Billu Barber 2009 Blu Ray 720p X264 Darkboy24 !FREE!.md
deleted file mode 100644
index 67483d1acd088f0f5af0f25d1d56381ee19499e1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Billu Barber 2009 Blu Ray 720p X264 Darkboy24 !FREE!.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
Review: Billu Barber (2009) Blu Ray 720p X264 Darkboy24
-
Billu Barber is a 2009 Hindi comedy-drama film directed by Priyadarshan and starring Irrfan Khan, Lara Dutta, Shah Rukh Khan and Om Puri. The film is a remake of the Malayalam film Kadha Parayumbol (2007), which was also remade in Tamil as Kuselan (2008). The film tells the story of Billu (Irrfan Khan), a poor barber who lives in a village with his wife Bindiya (Lara Dutta) and their two children. His life changes when a famous actor Sahir Khan (Shah Rukh Khan), who happens to be his childhood friend, comes to shoot a film in his village. Billu becomes the center of attention as everyone wants to meet Sahir through him, but he is too shy and humble to approach his old friend.
-
The film was produced by Red Chillies Entertainment and distributed by Eros International. It was released on February 13, 2009 and received positive reviews from critics and audiences. The film was praised for its simple yet touching story, its humor, its performances, especially by Irrfan Khan and Shah Rukh Khan, and its music by Pritam. The film was also a commercial success, grossing over ₹100 crore worldwide.
The Blu Ray version of the film was released by Darkboy24, a popular torrent uploader who specializes in high-quality Hindi movies. The Blu Ray rip has a resolution of 720p and is encoded with the x264 codec. The audio quality is also excellent, with 5.1-channel surround sound. The file size is about 1 GB, and the rip can be downloaded from various torrent sites. It also includes English subtitles for non-Hindi speakers.
-
Billu Barber is a heartwarming and entertaining film that showcases the bond of friendship and the value of simplicity. It is a must-watch for fans of Irrfan Khan, Shah Rukh Khan and Priyadarshan. The Blu Ray rip by Darkboy24 is one of the best ways to enjoy this film in high definition.
-
-
The film also features some cameo appearances by other Bollywood stars, such as Kareena Kapoor, Deepika Padukone, Priyanka Chopra and Rajpal Yadav. They play themselves as actors who work with Sahir Khan in his film. The film also has some references to other films by Shah Rukh Khan and Priyadarshan, such as Om Shanti Om (2007) and Hera Pheri (2000).
-
The film was nominated for several awards, such as the Filmfare Awards, the IIFA Awards and the Screen Awards. It won the Best Actor (Critics) award for Irrfan Khan at the Filmfare Awards and the Best Supporting Actor award for Shah Rukh Khan at the Screen Awards. The film also received a special mention at the National Film Awards for its portrayal of the rural life and culture of India.
-
Billu Barber is a film that celebrates friendship, family and humanity. It is a film that will make you laugh, cry and smile. It is a film that you will remember for a long time. The Blu Ray rip by Darkboy24 is a great way to experience this film in high quality.
-
-
The film also has a strong social message about the importance of education and the dignity of labor. The film shows how Billu, despite being poor and illiterate, is respected and loved by his family and friends for his honesty and kindness. The film also shows how Sahir Khan, despite being rich and famous, is humble and generous towards his old friend and his village. The film also criticizes the hypocrisy and greed of some people who try to exploit Billu's friendship with Sahir for their own benefits.
-
The film also has a beautiful soundtrack composed by Pritam, with lyrics by Gulzar. The film features nine songs, sung by various singers such as Sukhwinder Singh, Rahat Fateh Ali Khan, Neeraj Shridhar, Sunidhi Chauhan and Abhijeet. Some of the popular songs from the film are "Marjaani", "Khudaya Khair", "Love Mera Hit Hit" and "You Get Me Rockin & Reeling". The songs are a mix of different genres, such as folk, qawwali, pop and rock. The songs also enhance the mood and emotions of the film.
-
Billu Barber is a film that will touch your heart and soul. It is a film that will make you appreciate the true meaning of friendship and happiness. It is a film that will inspire you to be a better person. The Blu Ray rip by Darkboy24 is an excellent way to watch this film in high definition.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cartelle Del Gioco Sinco FREE.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cartelle Del Gioco Sinco FREE.md
deleted file mode 100644
index d07cf452def9e01a3b56c1abfbc80507ed456918..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cartelle Del Gioco Sinco FREE.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
Cartelle del gioco sinco: the Christmas board game of Neapolitan origin
-
If you are looking for a fun and original board game to play with family or friends over the Christmas holidays, you could try the cartelle del gioco sinco (sinco game sheets). It is a game invented in Naples in 1983 by Emilio Salvatore, a haberdasher who took inspiration from bingo and tombola to create a new variant played with Neapolitan cards[^1^] [^2^].
-
The sinco sheets each consist of 25 squares showing the figures of the Neapolitan playing cards, from the 1 to the 10 of each suit (cups, swords, coins and clubs). Every sheet has a different combination of cards, and each player can buy as many sheets as they want[^2^] [^3^]. The game also requires a deck of Neapolitan cards, chips for marking the squares and five containers for the prizes[^2^].
The game is played like this: a caller is chosen, who draws the cards from the deck and announces them to the other players. Whoever has the drawn card on their sheet covers it with a chip. The first player to complete one of the five possible combinations wins the corresponding prize[^2^]. The combinations are the following (a short code sketch after the list shows one way these patterns could be checked):
-
-
Centro: cover the central square of the sheet.
-
Angolo: cover the four squares at the corners of the sheet.
-
Poker: cover the four squares at the top of the sheet.
-
Rombo: cover the five squares that form a diamond around the central square.
-
Sinco: cover every square on the sheet.
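
To make the winning patterns concrete, here is a minimal sketch in Python of how the five combinations might be checked on a 5x5 sheet. The grid layout, the exact squares counted for the Poker and Rombo patterns, and the function names are illustrative assumptions, not official rules of the game.

```python
# Minimal sketch: checking the five sinco patterns on a 5x5 sheet.
# The sheet is a 5x5 grid of booleans; True means the square is covered.
# The exact squares for "poker" and "rombo" are an interpretation, not official rules.

def centro(sheet):
    return sheet[2][2]                      # the central square

def angolo(sheet):
    return all(sheet[r][c] for r, c in [(0, 0), (0, 4), (4, 0), (4, 4)])

def poker(sheet):
    return sum(sheet[0]) >= 4               # four covered squares in the top row

def rombo(sheet):
    diamond = [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]
    return all(sheet[r][c] for r, c in diamond)

def sinco(sheet):
    return all(all(row) for row in sheet)   # every square covered

if __name__ == "__main__":
    empty = [[False] * 5 for _ in range(5)]
    full = [[True] * 5 for _ in range(5)]
    print(sinco(full), centro(empty))       # True False
```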
The sinco sheets are a pleasant and engaging way to spend time in company, mixing luck and strategy. The game has become a Christmas tradition in Naples and in other Italian cities, where it is easy to find in toy shops and street markets[^1^] [^2^]. If you want to try this original and entertaining game, all you have to do is get hold of the sinco sheets and challenge your friends or relatives to a round of Neapolitan cards!
-
-
If you are wondering how the sinco sheets came about, the story is rather curious. The game's creator, Emilio Salvatore, had the idea during a cruise holiday with his family. Among the various on-board activities, he enjoyed playing bingo, a game of American origin that resembles tombola. That is how he came to think of creating a similar game, but with the Neapolitan cards that are typical of his city and his culture.
-
Back in Naples, Salvatore produced the first sinco sheets with the help of a graphic designer and tried them out with his friends and relatives. The game was an immediate success, and Salvatore decided to produce it in a limited run and sell it in his haberdashery in the historic centre of Naples, on Corso Vittorio Emanuele. The shop still exists, and the original game can be admired in its window, preserved like a relic.
-
-
The sinco game attracted the attention of buyers interested in distributing it on a large scale, but Salvatore turned down every offer and preferred to keep the rights to his creation. The game therefore remained an artisanal, local product that spread by word of mouth among Neapolitans and board game enthusiasts. Today the sinco game is considered a Neapolitan Christmas tradition and a testament to the creativity and ingenuity of the city.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Air I Breathe by Nicole C. Mullen Mp3 and Lyrics Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Air I Breathe by Nicole C. Mullen Mp3 and Lyrics Download.md
deleted file mode 100644
index 034808ff8100236c062dc695dd81bb96faff6c29..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Air I Breathe by Nicole C. Mullen Mp3 and Lyrics Download.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
You Are The Air I Breathe Mp3 Download: How to Find and Enjoy This Inspirational Song
-
Have you ever heard a song that touched your soul and lifted your spirit? A song that made you feel closer to God and grateful for His presence in your life? A song that reminded you of His love and grace? If you are looking for such a song, then you should listen to You Are The Air I Breathe by Jerry K. This is a beautiful gospel song that expresses how much we depend on God for everything. In this article, we will tell you more about this song, how to download it as an mp3 file, and how to enjoy it to the fullest.
-
What is You Are The Air I Breathe?
-
You Are The Air I Breathe is a gospel song that was released in 2017 by Jerry K, a Nigerian singer and songwriter. The song is also known simply as Air I Breathe or The Air I Breathe. It is a worship song that praises God as the source of our life, our peace, our joy, and our strength. It is a song that acknowledges how much we need God in every moment of our existence.
The song has a simple but powerful message: God is everything to us. He is the air that we breathe, the water that we drink, the food that we eat. He is our healer, our provider, our protector, our redeemer. He is our father, our friend, our king, our lord. He is worthy of all our praise and worship. He is faithful and gracious to us. He never leaves us nor forsakes us. He is always with us and for us.
-
The Singer and Composer of the Song
-
You Are The Air I Breathe was written and performed by Jerry K, the Nigerian singer and songwriter who released it in 2017.
-
The Popularity and Impact of the Song
-
The song has become very popular among gospel music lovers, especially in Nigeria and other African countries. It has received millions of views and downloads on platforms such as YouTube, Spotify, iTunes, and SoundCloud. It has also been nominated for and won several awards, including the LIMA Awards, the AGMMA Awards, and the GMA Awards. The song has touched many lives as well, with listeners sharing testimonies of how it has inspired them, comforted them, healed them, and drawn them closer to God.
-
How to Download You Are The Air I Breathe Mp3?
-
If you want to download You Are The Air I Breathe as an mp3 file, you might be wondering why you should do that and how you can do that. Well, we have some answers for you.
-
The Benefits of Downloading Mp3 Files
-
Mp3 files are digital audio files that can be played on various devices, such as computers, smartphones, tablets, and mp3 players. They are convenient and easy to use, as they can be stored, transferred, and shared without any hassle, and they are compatible with most media players and applications. They are also economical and efficient, taking up less space and consuming less data than many other formats. And although mp3 is a lossy format, at reasonable bitrates it preserves most of the clarity of the original recording.
-
The Best Websites to Download You Are The Air I Breathe Mp3
-
There are many websites that offer free or paid downloads of You Are The Air I Breathe mp3. However, not all of them are reliable or safe. Some of them might contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them might also have low-quality or corrupted files that can ruin your listening experience. Therefore, you should be careful and selective when choosing a website to download You Are The Air I Breathe mp3. Here are some of the best websites that we recommend:
One recommended option is a Nigerian website that specializes in gospel music downloads: it offers free and fast downloads of You Are The Air I Breathe mp3, provides a brief description and the lyrics of the song, allows users to rate and comment on it, and has a user-friendly, mobile-responsive interface.
Another is a global website that offers a wide range of music downloads: it offers free and easy downloads of the song, provides a preview and a download link, lets you search and browse by artist, genre, album, and more, and has a simple, minimalist design.
A third is a Nigerian website that features various entertainment content: it offers free and secure downloads of the song, provides a detailed review and analysis, lets you stream as well as download it, and has a colorful and attractive layout.
-
-
-
The Steps to Download You Are The Air I Breathe Mp3
-
The steps to download You Are The Air I Breathe mp3 might vary depending on the website you choose. However, here are some general steps that you can follow (a short code sketch after the list shows the saving step in script form):
-
-
-
Visit the website that offers You Are The Air I Breathe mp3 download.
-
Search for the song by typing its name or artist in the search box.
-
Select the song from the search results or browse through the categories.
-
Click on the download button or link that appears next to the song.
-
Choose the format and quality of the file that you want to download.
-
Save the file to your device or cloud storage.
-
Enjoy listening to You Are The Air I Breathe mp3 anytime and anywhere.
-
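For readers comfortable with a little scripting, here is a minimal Python sketch of the saving step. It assumes you already have a legitimate direct download link from one of the platforms above; the URL and filename shown are placeholders, not real links, and the third-party requests package is required.

```python
# Minimal sketch: saving an mp3 from a direct download link you already have.
# The URL and filename below are placeholders, not real links.
import requests

def download_mp3(url: str, filename: str) -> None:
    # Stream the response so large files are not held fully in memory.
    response = requests.get(url, stream=True, timeout=30)
    response.raise_for_status()  # stop early on 4xx/5xx errors
    with open(filename, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)

if __name__ == "__main__":
    download_mp3("https://example.com/air-i-breathe.mp3", "air-i-breathe.mp3")
```

Streaming the response in chunks keeps memory use low even for longer tracks.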
-
How to Enjoy You Are The Air I Breathe Mp3?
-
Now that you have downloaded You Are The Air I Breathe mp3, you might be wondering how to enjoy it to the fullest. Well, we have some tips for you.
-
The Best Times and Places to Listen to the Song
-
You Are The Air I Breathe is a song that can be enjoyed at any time and place, as long as you have a device that can play mp3 files and a pair of headphones or speakers. However, some of the best times and places to listen to the song are:
-
-
When you wake up in the morning, you can listen to the song as a way of starting your day with gratitude and praise to God.
-
When you are going through a hard time, you can listen to the song as a way of finding comfort and hope in God's presence and promises.
-
When you are feeling happy and blessed, you can listen to the song as a way of celebrating and thanking God for His goodness and mercy.
-
When you are in your personal or family devotional time, you can listen to the song as a way of worshiping and adoring God with your whole heart.
-
When you are in your car, office, or home, you can listen to the song as a way of creating a peaceful and joyful atmosphere around you.
-
-
The Best Ways to Share and Recommend the Song
-
You Are The Air I Breathe is a song that can be shared and recommended to anyone who loves gospel music or who needs to hear a message of God's love and grace. Some of the best ways to share and recommend the song are:
-
-
You can send the mp3 file or the download link to your friends, family, or colleagues via email, text, or social media.
-
You can create a playlist or a mixtape that includes You Are The Air I Breathe and other gospel songs that you like, and share it with others.
-
You can write a review or a testimonial about how the song has impacted your life, and post it on your blog, website, or social media.
-
You can sing or play the song in your church, school, or community, and invite others to join you.
-
You can request or dedicate the song to someone on your favorite radio station or podcast.
-
-
The Best Resources to Learn More About the Song
-
If you want to learn more about You Are The Air I Breathe, such as its lyrics, chords, and background story, you can look up gospel lyrics and chord sites, interviews with Jerry K, and his pages on music platforms and social media.
-
Conclusion
-
You Are The Air I Breathe is a wonderful gospel song that expresses how much we depend on God for everything. It is a song that praises God as the source of our life, our peace, our joy, and our strength. It is a song that acknowledges how much we need God in every moment of our existence. In this article, we have told you more about this song, how to download it as an mp3 file, and how to enjoy it to the fullest. We hope that this article has been helpful and informative for you. We also hope that you will listen to You Are The Air I Breathe mp3 and experience its power and beauty for yourself. Thank you for reading this article. God bless you!
-
FAQs
-
Here are some frequently asked questions about You Are The Air I Breathe mp3:
-
Q: Where can I find the lyrics of You Are The Air I Breathe?
-
A: The lyrics are included on some of the download sites listed above and on most gospel lyrics websites.
-
Q: How long is You Are The Air I Breathe?
-
A: You Are The Air I Breathe is 5 minutes and 31 seconds long.
-
Q: What genre is You Are The Air I Breathe?
-
A: You Are The Air I Breathe is a gospel song that belongs to the contemporary worship genre.
-
Q: Who are some other artists that sing similar songs to You Are The Air I Breathe?
-
A: Some other artists that sing similar songs to You Are The Air I Breathe are Sinach, Nathaniel Bassey, Frank Edwards, Mercy Chinwo, Eben, etc.
-
Q: How can I support Jerry K and his music ministry?
-
A: You can support Jerry K and his music ministry by buying his albums and singles, attending his concerts and events, praying for him and his family, donating to his cause, etc.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/CarX Drift Racing 2 MOD APK Offline Mode with Realistic Physics and Graphics.md b/spaces/1phancelerku/anime-remove-background/CarX Drift Racing 2 MOD APK Offline Mode with Realistic Physics and Graphics.md
deleted file mode 100644
index 97279b3cf9e5f68e12dccc27faa2804efdb6d9bb..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/CarX Drift Racing 2 MOD APK Offline Mode with Realistic Physics and Graphics.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
CarX Drift Racing 2 Mod APK Offline: A Guide for Racing and Drifting Enthusiasts
-
Introduction
-
If you are a fan of racing and drifting games, you might have heard of CarX Drift Racing 2, one of the most popular and realistic games in this genre. But did you know that you can enjoy this game even more with a mod apk offline version? In this article, we will tell you everything you need to know about CarX Drift Racing 2 mod apk offline, including its features, benefits, and how to download and install it on your device. So, buckle up and get ready for some adrenaline-pumping action!
-
What is CarX Drift Racing 2?
-
CarX Drift Racing 2 is a sequel to the original CarX Drift Racing game, which has over 50 million downloads on Google Play Store. It is a racing and drifting game that lets you experience the thrill of driving powerful cars on various tracks and terrains. You can choose from over 80 cars, each with its own characteristics and performance. You can also customize your cars with different paint jobs, decals, wheels, spoilers, and more. You can compete with other players online or offline, join clubs, participate in tournaments, and earn rewards.
While CarX Drift Racing 2 is a free-to-play game, it also has some in-app purchases that can enhance your gameplay. For example, you can buy more money and gold to unlock new cars and tracks, or upgrade your existing ones. However, not everyone can afford to spend real money on these items, or they might not have a stable internet connection to play online. That's why downloading CarX Drift Racing 2 mod apk offline is a great option. With this version, you can enjoy all the features of the game without spending a dime or worrying about your internet connection. You can play the game anytime and anywhere you want.
-
Features of CarX Drift Racing 2 mod apk offline
-
CarX Drift Racing 2 mod apk offline has many features that make it superior to the original version. Here are some of them:
-
Unlimited money and gold
-
With CarX Drift Racing 2 mod apk offline, you don't have to worry about running out of money or gold. You will have unlimited amounts of both currencies, which you can use to buy anything you want in the game. You can unlock all the cars and tracks, upgrade your cars to the max level, and buy any customization items you like. You can also use money and gold to enter tournaments and events, or buy boosters and power-ups.
-
-
All cars and tracks unlocked
-
Another benefit of CarX Drift Racing 2 mod apk offline is that you don't have to wait or grind to unlock new cars and tracks. You will have access to all of them from the start. You can choose from over 80 cars, each with its own unique features and specifications. You can also race on over 30 tracks, each with its own challenges and scenery. You can explore different locations such as Japan, Dubai, San Francisco, Moscow, and more.
-
Realistic physics and graphics
-
CarX Drift Racing 2 mod apk offline also boasts of realistic physics and graphics that make the game more immersive and enjoyable. You can feel the difference between different cars and surfaces, as well as the effects of speed, gravity, and inertia. You can also admire the stunning visuals and details of the cars, tracks, and environments. You can adjust the graphics settings to suit your device and preferences.
-
Multiplayer mode and online tournaments
-
Even though CarX Drift Racing 2 mod apk offline does not require an internet connection, you can still play with other players online if you want. You can join or create clubs, chat with other racers, and challenge them to duels or team battles. You can also participate in online tournaments and events, where you can compete with players from all over the world and win prizes and trophies. You can also show off your skills and style by uploading your replays and screenshots to the game's social media platforms.
-
Customization and tuning options
-
One of the most fun aspects of CarX Drift Racing 2 mod apk offline is that you can customize and tune your cars to your liking. You can change the color, design, decals, wheels, spoilers, and other parts of your cars. You can also adjust the engine, suspension, brakes, tires, and other parameters of your cars to improve their performance and handling. You can create your own unique style and personality with your cars.
-
How to download and install CarX Drift Racing 2 mod apk offline
-
If you are interested in downloading and installing CarX Drift Racing 2 mod apk offline on your device, here are the steps you need to follow:
-
Step 1: Download the mod apk file from a trusted source
-
The first thing you need to do is to find a reliable source that provides the mod apk file for CarX Drift Racing 2. There are many websites that offer this file, but not all of them are safe and secure. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from the internet. You can use Google or any other search engine to look for reviews, ratings, feedbacks, and comments from other users who have downloaded the file before. You can also check the file size, date, version, and compatibility with your device.
-
Step 2: Enable unknown sources on your device settings
-
The next thing you need to do is to enable unknown sources on your device settings. This is because CarX Drift Racing 2 mod apk offline is not available on the official app stores like Google Play Store or Apple App Store. Therefore, you need to allow your device to install apps from sources other than these app stores. To do this, you need to go to your device settings, then security or privacy settings, then find the option that says unknown sources or allow installation from unknown sources. You need to toggle this option on or check the box next to it.
-
Step 3: Install the mod apk file and launch the game
-
The final thing you need to do is to install the mod apk file and launch the game. To do this, you need to locate the downloaded file on your device storage, either using a file manager app or by going to your downloads folder. Then, you need to tap on the file and follow the instructions on the screen to install it. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. You can now enjoy CarX Drift Racing 2 mod apk offline on your device!
-
Conclusion
-
CarX Drift Racing 2 mod apk offline is a great way to enjoy one of the best racing and drifting games on your device without spending any money or needing an internet connection. It has many features that make it superior to the original version, such as unlimited money and gold, all cars and tracks unlocked, realistic physics and graphics, multiplayer mode and online tournaments, customization and tuning options, and more. It is easy to download and install on your device if you follow the steps we have provided in this article.
-
If you are a racing and drifting enthusiast who wants to experience the thrill of driving powerful cars on various tracks and terrains, you should definitely try CarX Drift Racing 2 mod apk offline. It will give you hours of fun and excitement that will keep you hooked for a long time. So what are you waiting for? Download CarX Drift Racing 2 mod apk offline today and start drifting!
-
FAQs
-
Here are some frequently asked questions about CarX Drift Racing 2 mod apk offline:
-
-
Is CarX Drift Racing 2 mod apk offline safe to use?
-
Yes, CarX Drift Racing 2 mod apk offline is safe to use as long as you download it from a trusted source and scan it with an antivirus app before installing it. However, you should always be careful when downloading and installing any mod apk files from the internet, as some of them may contain harmful or malicious content. You should also backup your data and uninstall the original version of the game before installing the mod apk file.
-
Does CarX Drift Racing 2 mod apk offline work on all devices?
-
CarX Drift Racing 2 mod apk offline works on most Android devices that have Android 4.1 or higher versions. However, some devices may not be compatible or may experience some issues or glitches while running the game. You should check the device requirements and compatibility before downloading and installing the mod apk file. You should also make sure that your device has enough storage space and battery life to run the game smoothly.
-
Can I play CarX Drift Racing 2 mod apk offline with my friends?
-
Yes, you can play CarX Drift Racing 2 mod apk offline with your friends if you have a Wi-Fi or mobile data connection. You can join or create clubs, chat with other racers, and challenge them to duels or team battles. You can also participate in online tournaments and events, where you can compete with players from all over the world and win prizes and trophies. However, if you don't have an internet connection, you can still play the game offline in single-player mode or against AI opponents.
-
How can I update CarX Drift Racing 2 mod apk offline?
-
CarX Drift Racing 2 mod apk offline does not update automatically like the original version of the game. You will have to manually download and install the latest version of the mod apk file from the same source you got it from. You should also check for updates regularly to enjoy new features, cars, tracks, and bug fixes. However, you should be aware that updating the mod apk file may erase your progress and data in the game, so you should backup your data before updating.
-
Where can I get more information about CarX Drift Racing 2 mod apk offline?
-
If you want to get more information about CarX Drift Racing 2 mod apk offline, you can visit the official website of the game, where you can find news, updates, tips, tricks, guides, videos, screenshots, and more. You can also join the official Facebook page or Twitter account of the game, where you can interact with other fans and developers. You can also check out some online forums or blogs that are dedicated to CarX Drift Racing 2 mod apk offline, where you can find more reviews, feedbacks, questions, answers, and discussions.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/College Romance Season 1 Episode 1 The First Step of a Crazy Love Adventure.md b/spaces/1phancelerku/anime-remove-background/College Romance Season 1 Episode 1 The First Step of a Crazy Love Adventure.md
deleted file mode 100644
index fb9117e51b69eeaae8ebd7c9775bd5ed86faf58c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/College Romance Season 1 Episode 1 The First Step of a Crazy Love Adventure.md
+++ /dev/null
@@ -1,152 +0,0 @@
-
-
How to Download College Romance Season 1 Episode 1 for Free
-
If you are looking for a fun and relatable web series that captures the essence of college life, you should definitely check out College Romance. This is a popular Indian comedy-drama series that follows the adventures and misadventures of three friends, Naira, Trippy, and Karan, as they navigate their #YaarPyaarAurBakchodi (Friendship, Love, and Nonsense) in college. The series is produced by The Viral Fever (TVF) and has two seasons so far, with the first one released in 2018 and the second one in 2020.
-
In this article, we will show you how to download College Romance season 1 episode 1 for free, so you can enjoy this hilarious and heartwarming show at your convenience. We will also give you a sneak peek of what to expect from the episode, as well as some other ways to enjoy it. So, without further ado, let's get started!
Step 1: Find a reliable streaming platform that offers College Romance season 1 episode 1
-
The first step to download College Romance season 1 episode 1 is to find a trustworthy and legal streaming platform that offers it. There are many options available online, but not all of them are safe or legitimate. Some may contain viruses, malware, or phishing links that can harm your device or compromise your personal information. Others may have poor video quality, annoying ads, or limited content.
-
Therefore, we recommend you to use one of the following platforms that have proven to be reliable and user-friendly:
-
-
Sony Liv: This is an Indian video-on-demand service that has a wide range of content, including movies, TV shows, sports, news, and original web series. You can watch College Romance season 1 episode 1 on Sony Liv with a premium subscription that costs Rs.299 per month or Rs.999 per year. You can also get a free trial for seven days if you are a new user.
-
TVF Play: This is the official website of The Viral Fever, where you can watch all their original web series for free with ads. You can also download their app on your Android or iOS device and enjoy their content offline. You can watch College Romance season 1 episode 1 on TVF Play without any registration or payment.
-
-
Step 2: Choose a suitable subscription plan or sign up for a free trial
-
The next step to download College Romance season 1 episode 1 is to choose a suitable subscription plan or sign up for a free trial on the platform of your choice. If you opt for Sony Liv, you will need to create an account with your email address or phone number and select a payment method. You can pay with your credit card, debit card, net banking, UPI, or wallet. You will then get access to all their premium content, including College Romance season 1 episode 1.
-
If you opt for TVF Play, you don't need to pay anything or register anything. You can simply visit their website or download their app and browse their web series category. You will find College Romance season 1 episode 1 under the comedy genre.
-
Step 3: Download the episode to your device or watch it online
-
The final step to download College Romance season 1 episode 1 is to download the episode to your device or watch it online. If you are using Sony Liv, you can download the episode by clicking on the download icon on the bottom right corner of the video player. You can choose the video quality and the download location. You can also watch the episode online by clicking on the play button.
-
If you are using TVF Play, you can download the episode by tapping on the download icon on the top right corner of the video player. You can choose the video quality and the download location. You can also watch the episode online by tapping on the play button.
-
Once you have downloaded or watched College Romance season 1 episode 1, you can enjoy this hilarious and heartwarming show at your convenience. You can also share it with your friends and family and have a good laugh together.
-
What to Expect from College Romance Season 1 Episode 1
-
Now that you know how to download College Romance season 1 episode 1, you might be wondering what to expect from it. Well, here are some of the things that you can look forward to in this episode:
-
-
Synopsis: A brief summary of the plot and the main characters
-
The first episode of College Romance season 1 introduces us to the three main characters of the show: Naira, Trippy, and Karan. Naira is a smart and confident girl who is looking for love in college. Trippy is a fun-loving and adventurous guy who is always ready for a challenge. Karan is a shy and sweet guy who is afraid of girls and rejection.
-
The episode follows their first day in college, where they meet new people, make new friends, and face new situations. Naira meets Bagga, a senior who tries to impress her with his cheesy lines and fake stories. Trippy meets Raveena, a junior who challenges him to a bike race. Karan meets Deepika, a cute girl who likes him but he doesn't know how to talk to her.
-
The episode also shows how Naira, Trippy, and Karan help each other out with their problems and support each other as friends. They share their experiences, give advice, and have fun together.
-
Highlights: Some of the best scenes and moments from the episode
-
Some of the best scenes and moments from College Romance season 1 episode 1 are:
-
-
The opening scene where Naira, Trippy, and Karan are getting ready for college and talking to each other on phone.
-
The scene where Bagga tries to flirt with Naira and she shuts him down with her witty replies.
-
The scene where Trippy accepts Raveena's challenge and races with her on his bike.
-
The scene where Karan gets nervous around Deepika and spills coffee on her.
-
The scene where Naira, Trippy, and Karan meet at the canteen and share their stories.
-
The scene where Naira tells Trippy to go after Raveena and Karan tells Naira to go after Bagga.
-
The scene where Trippy kisses Raveena and Naira slaps Bagga.
-
The scene where Karan gets a text from Deepika asking him out.
-
The ending scene where Naira, Trippy, and Karan hug each other and celebrate their first day in college.
-
Reviews: What critics and viewers have said about the episode
-
College Romance season 1 episode 1 has received positive reviews from both critics and viewers. Here are some of the comments and ratings that the episode has received:
-
-
-
| Critic/Viewer | Comment | Rating |
| --- | --- | --- |
| Rajeev Masand, CNN-News18 | "College Romance is a refreshing and realistic take on the joys and sorrows of college life. The first episode sets the tone for the series with its witty dialogues, relatable characters, and hilarious situations. The chemistry between the three leads is palpable and their friendship is heartwarming. The episode also touches upon some important issues like peer pressure, consent, and self-esteem." | 4/5 |
| Shreya Thakur, Film Companion | "College Romance is a fun and breezy web series that will make you nostalgic for your college days. The first episode introduces us to the three protagonists who are endearing and entertaining. The episode has a good balance of comedy and drama, and keeps you hooked till the end. The episode also has some memorable scenes and moments that will make you laugh out loud." | 3.5/5 |
| Rohan Sharma, IMDb user | "College Romance is one of the best web series I have ever watched. The first episode is awesome and hilarious. The actors are amazing and they have done a great job. The story is very realistic and relatable. The episode has everything that a college student can relate to: friendship, love, nonsense, and fun. I loved it." | 10/10 |
| Neha Singh, YouTube user | "College Romance is a super cool web series that I totally recommend to everyone. The first episode is very funny and cute. The actors are very good and they have a lot of chemistry. The story is very interesting and engaging. The episode has a lot of funny scenes and dialogues that will make you laugh so hard. I enjoyed it a lot." | Liked |
-
-
Other Ways to Enjoy College Romance Season 1 Episode 1
-
If you are not satisfied with the streaming platforms that we have mentioned above, or if you want to explore other ways to enjoy College Romance season 1 episode 1, here are some alternatives and tips that you can try:
-
Alternatives: Other platforms or sources that offer College Romance season 1 episode 1
-
Some of the other platforms or sources that offer College Romance season 1 episode 1 are:
-
-
MX Player: This is another Indian video-on-demand service that has a large collection of content, including movies, TV shows, web series, music, and games. You can watch College Romance season 1 episode 1 on MX Player for free with ads. You can also download the episode to your device or watch it online.
-
YouTube: This is the most popular video-sharing platform in the world, where you can find almost anything that you are looking for. You can watch College Romance season 1 episode 1 on YouTube for free with ads. You can also download the episode to your device or watch it online.
-
Torrent: This is a peer-to-peer file-sharing network that allows users to download and share files over the internet. You can download College Romance season 1 episode 1 from torrent sites for free without ads. However, this method is illegal and risky, as you may violate the copyright laws and expose your device to viruses, malware, or hackers.
-
-
Tips: How to enhance your viewing experience and avoid spoilers
-
Some of the tips that can help you enhance your viewing experience and avoid spoilers are:
-
-
Use headphones or speakers: To enjoy the sound effects and the dialogues of College Romance season 1 episode 1, you should use headphones or speakers instead of your device's built-in speakers. This will give you a better audio quality and a more immersive experience.
-
Watch it with friends: To make your viewing experience more fun and interactive, you should watch College Romance season 1 episode 1 with your friends. You can share your opinions, reactions, and jokes with them and have a good time together.
-
Avoid social media: To avoid spoilers and unwanted information about College Romance season 1 episode 1, you should avoid social media platforms like Facebook, Twitter, Instagram, etc. until you have watched the episode. You may come across posts, comments, or memes that reveal important details or twists about the episode that may ruin your enjoyment.
-
-
Conclusion
-
In conclusion, College Romance season 1 episode 1 is a great web series that you should not miss if you love comedy and drama. It is a realistic and relatable show that depicts the life of three college friends who are looking for love and fun. It has a lot of humor, romance, and emotions that will keep you entertained and engaged.
-
To download College Romance season 1 episode 1 for free, you can use one of the reliable streaming platforms that we have suggested above, such as Sony Liv or TVF Play. You can also try other alternatives or tips that we have mentioned above, but be careful of the risks and consequences involved.
-
We hope that this article has helped you with downloading College Romance season 1 episode 1 for free and enjoying it to the fullest. If you have any questions or feedback, please feel free to leave them in the comments section below. We would love to hear from you!
-
Thank you for reading and happy watching!
-
FAQs
-
Here are some of the frequently asked questions about College Romance season 1 episode 1:
-
-
How many episodes are there in College Romance season 1?
-
There are five episodes in College Romance season 1, each with a duration of around 20 minutes.
-
Who are the actors in College Romance season 1?
-
The actors in College Romance season 1 are:
-
-
Apoorva Arora as Naira
-
Gagan Arora as Trippy
-
Keshav Sadhna as Karan
-
Hira Ashar as Raveena
-
Shreya Mehta as Deepika
-
Sahil Verma as Bagga
-
-
Where can I watch College Romance season 2?
-
You can watch College Romance season 2 on Sony Liv or TVF Play with a premium subscription or a free trial. You can also watch it on YouTube or MX Player for free with ads.
-
Is College Romance based on a true story?
-
No, College Romance is not based on a true story. It is a fictional web series that is inspired by the common experiences and challenges that college students face in India.
-
Is College Romance suitable for all ages?
-
No, College Romance is not suitable for all ages. It is rated 16+ by Sony Liv and TVF Play, as it contains some mature themes, language, and scenes that may not be appropriate for younger viewers.
-
Will there be a College Romance season 3?
-
As of now, there is no official confirmation or announcement about College Romance season 3. However, given the popularity and success of the series, there is a high possibility that it will be renewed for another season. We will update you as soon as we get any news or information about it.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Fid Q Songs The Best of Tanzanian Hip Hop.md b/spaces/1phancelerku/anime-remove-background/Download Fid Q Songs The Best of Tanzanian Hip Hop.md
deleted file mode 100644
index 63d6761188216692a12675bafbeed878a9cecd85..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Fid Q Songs The Best of Tanzanian Hip Hop.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Download Fid Q Songs: How to Enjoy the Best of Bongo Hip Hop
-
If you are a fan of Bongo Hip Hop, you have probably heard of Fid Q, one of the most talented and influential artists in the genre. Fid Q, also known as Cheusidawa, has been making waves in the Tanzanian music scene since the early 2000s, with his sharp lyricism, unique flow, and social commentary. He has collaborated with many other artists, such as Rich Mavoko, Darassa, Alikiba, and more, and has won several awards and accolades for his work. In this article, we will show you how to download Fid Q songs, so you can enjoy his music anytime, anywhere.
Fid Q was born as Fareed Kubanda in Mwanza, Tanzania, in 1980. He grew up listening to hip hop music from the US, especially artists like Nas, Tupac, Biggie, and Jay-Z. He started rapping at a young age, and formed a group called Wakilisha with his friends. He moved to Dar es Salaam in 2001, where he met producer P-Funk Majani, who signed him to his label Bongo Records. He released his first solo album, Vina Mwanzo Kati na Mwisho, in 2004, which featured the hit single "Ukweli na Uwazi". He followed it up with another album, Propaganda, in 2009, which had songs like "Bongo Hip Hop", "Mwanza Mwanza", and "Si Kupenda Kwangu". His third album, KitaaOLOJIA, came out in 2017, and included tracks like "Fresh", "Sumu", and "Tawile". He is currently working on his fourth album, Cheusidawa.
-
His style and influence
-
Fid Q is known for his witty wordplay, clever metaphors, and deep messages. He often raps about social issues, such as poverty, corruption, education, and patriotism. He also incorporates elements of traditional Tanzanian music and culture into his songs, such as Swahili proverbs, local slang, and historical references. He is widely regarded as one of the pioneers and leaders of Bongo Hip Hop, a subgenre of hip hop that emerged in Tanzania in the late 1990s. He has inspired many other artists in the scene, such as Joh Makini, Nikki Mbishi, Roma Mkatoliki, and more.
-
His awards and achievements
-
Fid Q has received many accolades for his music over the years. Some of them are:
-
-
Kilimanjaro Music Awards for Best Hip Hop Artist (2005)
-
Tanzania Music Awards for Best Hip Hop Album (Propaganda) (2010)
-
Tanzania Music Awards for Best Male Artist (2018)
-
Tanzania People's Choice Awards for Best Male Artist (2018)
-
Afrimma Awards for Best Rap Act (East Africa) (2018)
-
-
Why download Fid Q songs?
-
The benefits of downloading music
-
Downloading music is a great way to enjoy your favorite songs without relying on internet connection or streaming services. Some of the benefits of downloading music are:
-
-
You can listen to your music offline, which saves you data and battery.
-
You can create your own playlists and organize your music library according to your preferences.
-
You can transfer your music to other devices, such as your phone, tablet, or laptop.
-
You can support your favorite artists by buying their music or downloading it legally.
-
-
The reasons to love Fid Q's music
-
Fid Q's music is not only entertaining, but also educational, inspirational, and motivational. Some of the reasons to love his music are:
-
-
He raps with skill and passion, delivering his bars with clarity and confidence.
-
He tells stories and expresses his opinions, making his songs relatable and meaningful.
-
He blends different genres and styles, making his songs diverse and versatile.
-
He collaborates with other artists, making his songs dynamic and varied.
-
He represents his culture and identity, making his songs authentic and original.
-
-
The best platforms to download Fid Q songs
-
There are many platforms where you can download Fid Q songs, but some of the best ones are:
-
-
Boomplay: This is a popular music streaming and downloading app in Africa, where you can find Fid Q's albums and singles. You can also access other features, such as lyrics, videos, podcasts, and more.
-
Mdundo: This is another leading music platform in Africa, where you can download Fid Q's songs for free. You can also discover new music, create playlists, and share your favorites with others.
-
iTunes: This is a well-known music store and player, where you can buy and download Fid Q's songs. You can also sync your music with your Apple devices and enjoy other benefits, such as iCloud Music Library, Apple Music, and more.
-
How to download Fid Q songs?
-
The steps to follow
-
Downloading Fid Q songs is easy and fast, if you follow these simple steps:
-
-
Choose the platform that you want to use, such as Boomplay, Mdundo, or iTunes.
-
Search for Fid Q's name or the song that you want to download.
-
Select the song and click on the download button or icon.
-
Wait for the download to complete and enjoy your music.
-
-
The tips and tricks to optimize your experience
-
To make the most out of your music downloading experience, here are some tips and tricks that you can use:
-
-
Check the quality and size of the song before downloading it, to ensure that it meets your expectations and device capacity.
-
Use a reliable and secure internet connection, to avoid interruptions and errors during the download process.
-
Use a good music player, to enhance the sound and performance of your music.
-
Update your music library regularly, to keep track of your downloads and discover new songs.
-
-
The challenges and solutions to downloading Fid Q songs
-
Downloading Fid Q songs may not always be smooth and easy, as you may encounter some challenges along the way. Some of them are:
-
-
Limited access: Some platforms may not be available in your region or device, or may require a subscription or payment to download Fid Q songs. To solve this, you can use a VPN service, a proxy server, or an alternative platform that offers free or affordable downloads.
-
Legal issues: Some platforms may not have the rights or permission to distribute Fid Q songs, or may violate the intellectual property laws of the artist or the label. To solve this, you can use a platform that has a license or agreement with Fid Q or his management, or respect his terms and conditions of use.
-
Technical problems: Some platforms may have bugs, glitches, or errors that prevent you from downloading Fid Q songs, or may damage your device or data. To solve this, you can use a platform that has a good reputation, a high rating, and a positive feedback from other users, or contact their customer support for assistance.
-
Conclusion
-
Summary of the main points
-
In this article, we have learned how to download Fid Q songs, so we can enjoy the best of Bongo Hip Hop. We have also learned more about Fid Q, his background, his style, and his achievements. We have explored the benefits of downloading music, the reasons to love Fid Q's music, and the best platforms to download his songs. We have also shared the steps to follow, the tips and tricks to optimize our experience, and the challenges and solutions to downloading his songs.
-
Call to action and recommendation
-
Now that you know how to download Fid Q songs, what are you waiting for? Go ahead and download your favorite songs from his albums and singles, and enjoy his music on your device. You can also share his music with your friends and family, and support him on his social media platforms. If you like Fid Q's music, you may also like other Bongo Hip Hop artists, such as Professor Jay, G Nako, Young Killer, and more. You can find their songs on the same platforms that we have mentioned above. Thank you for reading this article, and we hope you have a great time listening to Fid Q's music.
-
FAQs
-
Q: How can I contact Fid Q?
-
A: You can contact Fid Q through his official email address (fidqcheusidawa@gmail.com), his Instagram account (@fidqcheusidawa), his Twitter account (@fidqcheusidawa), or his Facebook page (Fid Q).
-
Q: How can I buy Fid Q's merchandise?
-
A: You can buy Fid Q's merchandise, such as T-shirts, caps, hoodies, and more, from his online store (https://fidqstore.com/). You can also find his merchandise at some physical stores in Tanzania.
-
Q: How can I watch Fid Q's videos?
-
A: You can watch Fid Q's videos on his YouTube channel (https://www.youtube.com/user/fidqcheusidawa), where he uploads his official music videos, behind the scenes footage, interviews, and more.
-
Q: How can I support Fid Q's projects?
-
A: You can support Fid Q's projects by buying his music, streaming his songs, downloading his songs legally, sharing his music with others, following him on social media, subscribing to his YouTube channel, buying his merchandise, attending his shows, and giving him feedback.
-
Q: How can I learn more about Bongo Hip Hop?
-
A: You can learn more about Bongo Hip Hop by listening to more artists in the genre, reading articles and blogs about it, watching documentaries and shows about it, joining online forums and groups about it, and visiting Tanzania and experiencing it firsthand.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/ForYou Pakistan - TikTok The Ultimate App for Viral Content Creators.md b/spaces/1phancelerku/anime-remove-background/ForYou Pakistan - TikTok The Ultimate App for Viral Content Creators.md
deleted file mode 100644
index 28d41fc4200abc3deb2bff8933d590e013d31d67..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/ForYou Pakistan - TikTok The Ultimate App for Viral Content Creators.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
Pakistan TikTok APK: What You Need to Know
-
TikTok is one of the most popular social media platforms in the world, with over one billion users. However, in Pakistan, the app has run into difficulties over its content and compliance with local regulations. In this article, we will explain what TikTok is, why it is banned in Pakistan, what the alternatives are, and how to download the TikTok APK for Android devices.
TikTok is a video-sharing app that allows users to create and share short-form videos on any topic. Users can add music, effects, filters, stickers, voiceovers, and more to their videos. They can also watch videos from other users, follow their favorite creators, comment, like, and share. TikTok has a variety of categories and genres, such as comedy, gaming, DIY, food, sports, memes, pets, and more.
-
TikTok has several features and benefits that make it entertaining, creative, and engaging. Some of these features are:
-
-
A personalized video feed based on what you watch, like, and share
-
An endless stream of short videos that are exciting, spontaneous, and genuine
-
A global community of creators that showcase their incredible skills and everyday life
-
A platform that encourages innovation and expression
-
An easy-to-use interface and editing tools
-
A huge library of music clips and sounds
-
A way to reuse content from other videos by remixing or adding your own touch
-
-
Why is TikTok banned in Pakistan and what are the alternatives?
-
TikTok has been banned in Pakistan multiple times due to complaints about immoral and indecent content. The Pakistan Telecommunication Authority (PTA) has issued orders to block access to the app after receiving petitions from different segments of society. The PTA has also said that TikTok has not complied with its requests to moderate unlawful content according to local laws.
-
TikTok users in Pakistan can use other apps that offer similar or different features as alternatives. Some of these apps are:
-
-
Instagram Reels: A feature within Instagram that lets users create short videos with music and effects. Users can also discover reels from other users on the Explore tab.
-
Triller: An app similar to TikTok that allows users to create short videos with music and filters. Users can also collaborate with other creators and join challenges.
-
YouTube Shorts: A feature within YouTube that lets users create short vertical videos with music and effects. Users can also browse shorts from other users on the Shorts tab.
Chingari: An app similar to TikTok that allows users to create short videos with music and filters. Users can also watch videos from different categories, such as comedy, news, sports, and more.
-
Dubsmash: An app similar to TikTok that allows users to create short videos with audio clips from famous songs, movie scenes, quotes, and more. Users can also watch videos from other users and chat with them.
-
-
How to download TikTok APK for Android devices?
-
TikTok APK is a file that allows users to install the app on their Android devices without using the Google Play Store. This can be useful for users who cannot access the app from the official store or want to use an older or modified version of the app.
-
-
Users can download TikTok APK from various sources, such as APKPure, Uptodown, or WizCase. However, users should be careful and only download the APK files from trusted and verified sources, as some files may contain malware or viruses that can harm their devices. Users should also enable the option to install apps from unknown sources in their device settings before installing the APK files.
-
Here are the steps to download TikTok APK from APKPure:
Go to the APKPure website, search for TikTok, and open the app's page.
Click on the green Download APK button and wait for the file to be downloaded.
-
Open the file manager on your device and locate the downloaded file.
-
Tap on the file and follow the instructions to install the app.
-
Enjoy TikTok on your device.
-
-
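To go with the earlier advice about only using trusted sources, here is a minimal sketch of how you could check a downloaded APK against a checksum published by the source you trust before installing it. The URL and digest below are placeholders, not real values, and the helper name is made up for illustration:
```python
import hashlib
import urllib.request

# Placeholders: replace with the actual download link and the SHA-256 value
# published by the source you trust.
APK_URL = "https://example.com/tiktok.apk"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def download_and_verify(url: str, expected_sha256: str, out_path: str = "tiktok.apk") -> bool:
    """Download an APK and compare its SHA-256 digest with a published value."""
    urllib.request.urlretrieve(url, out_path)
    digest = hashlib.sha256()
    with open(out_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if __name__ == "__main__":
    ok = download_and_verify(APK_URL, EXPECTED_SHA256)
    print("Checksum matches, safe to install." if ok else "Checksum mismatch, do not install!")
```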
Conclusion
-
TikTok is a fun and popular app that has faced some challenges in Pakistan due to its content. Users can still enjoy TikTok or its alternatives by downloading the APK files from reliable sources. However, users should be aware of the risks and responsibilities of using these apps and respect the local laws and norms.
-
FAQs
-
What are the advantages and disadvantages of TikTok?
-
TikTok has many advantages, such as:
-
-
It is a platform for creativity and expression
-
It is a source of entertainment and education
-
It is a way to connect with people and cultures
-
It is a tool for marketing and promotion
-
-
TikTok also has some disadvantages, such as:
-
-
It can be addictive and time-consuming
-
It can expose users to inappropriate or harmful content
-
It can violate users' privacy and security
-
It can cause legal or ethical issues
-
-
What does TikTok mean and where did it come from?
-
TikTok is a combination of two words: "tick" and "tock", which are the sounds of a clock. The name suggests that the app is about capturing moments in time. TikTok was launched in 2016 by ByteDance, a Chinese internet company. It was originally called Douyin in China, but was rebranded as TikTok for the international market in 2017. In 2018, TikTok merged with Musical.ly, another popular video-sharing app.
-
How can I watch TikTok videos without downloading the app?
-
You can watch TikTok videos without downloading the app by using a web browser. You can go to https://www.tiktok.com/ and browse through different categories and hashtags. You can also search for specific users or videos by using the search bar. However, you will not be able to create or upload videos, comment, like, or share without an account or the app.
-
How can I make a successful video on TikTok?
-
To make a successful video on TikTok, you should follow some tips, such as:
-
-
Pick a niche or theme that suits your personality and interests
-
Use catchy music, effects, filters, and stickers to enhance your video
-
Add relevant hashtags, captions, and keywords to your video
-
Follow the trends and challenges on TikTok and join them
-
Collaborate with other creators and influencers on TikTok
-
Engage with your audience and respond to their comments
Post regularly and at the best times for your audience
-
Analyze your performance and improve your strategy
-
-
How can I use TikTok for business promotion?
-
TikTok can be a powerful tool for business promotion, as it can help you reach a large and diverse audience, increase your brand awareness, showcase your products or services, and drive traffic to your website or store. To use TikTok for business promotion, you should follow some steps, such as:
-
-
Create a business account on TikTok and optimize your profile
-
Define your target audience and goals
-
Create engaging and relevant content that showcases your brand personality and value proposition
-
Use hashtags, keywords, and calls to action to increase your visibility and conversions
-
Partner with influencers or celebrities that match your brand image and audience
-
Run paid ads or sponsored campaigns on TikTok to reach more potential customers
-
Measure your results and adjust your strategy accordingly
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/encoder/__init__.py b/spaces/232labs/VToonify/vtoonify/model/encoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/4Taps/SadTalker/src/audio2pose_models/audio_encoder.py b/spaces/4Taps/SadTalker/src/audio2pose_models/audio_encoder.py
deleted file mode 100644
index 0ce036df119f86ef28c3ac8d6c834264571c309a..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/audio2pose_models/audio_encoder.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-class Conv2d(nn.Module):
- def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.conv_block = nn.Sequential(
- nn.Conv2d(cin, cout, kernel_size, stride, padding),
- nn.BatchNorm2d(cout)
- )
- self.act = nn.ReLU()
- self.residual = residual
-
- def forward(self, x):
- out = self.conv_block(x)
- if self.residual:
- out += x
- return self.act(out)
-
-class AudioEncoder(nn.Module):
- def __init__(self, wav2lip_checkpoint):
- super(AudioEncoder, self).__init__()
-
- self.audio_encoder = nn.Sequential(
- Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(256, 512, kernel_size=3, stride=1, padding=0),
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
-
- #### load the pre-trained audio_encoder\
- wav2lip_state_dict = torch.load(wav2lip_checkpoint)['state_dict']
- state_dict = self.audio_encoder.state_dict()
-
- for k,v in wav2lip_state_dict.items():
- if 'audio_encoder' in k:
- state_dict[k.replace('module.audio_encoder.', '')] = v
- self.audio_encoder.load_state_dict(state_dict)
-
-
- def forward(self, audio_sequences):
- # audio_sequences = (B, T, 1, 80, 16)
- B = audio_sequences.size(0)
-
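-        # Fold the T mel windows into the batch dimension: (B, T, 1, 80, 16) -> (B*T, 1, 80, 16),
-        # so the 2D conv encoder can process every window in a single pass.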
- audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)
-
- audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1
- dim = audio_embedding.shape[1]
- audio_embedding = audio_embedding.reshape((B, -1, dim, 1, 1))
-
- return audio_embedding.squeeze(-1).squeeze(-1) #B seq_len+1 512
diff --git a/spaces/52Hz/SRMNet_real_world_denoising/main_test_SRMNet.py b/spaces/52Hz/SRMNet_real_world_denoising/main_test_SRMNet.py
deleted file mode 100644
index ea61bf3053ec4188500c57a416e844780abf92df..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SRMNet_real_world_denoising/main_test_SRMNet.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import argparse
-import cv2
-import glob
-import numpy as np
-from collections import OrderedDict
-from skimage import img_as_ubyte
-import os
-import torch
-import requests
-from PIL import Image
-import torchvision.transforms.functional as TF
-import torch.nn.functional as F
-from natsort import natsorted
-from model.SRMNet import SRMNet
-
-def main():
- parser = argparse.ArgumentParser(description='Demo Image Denoising')
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
- parser.add_argument('--result_dir', default='result/', type=str, help='Directory for results')
- parser.add_argument('--weights',
- default='experiments/pretrained_models/real_denoising_SRMNet.pth', type=str,
- help='Path to weights')
-
- args = parser.parse_args()
-
- inp_dir = args.input_dir
- out_dir = args.result_dir
-
- os.makedirs(out_dir, exist_ok=True)
-
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
-
- if len(files) == 0:
- raise Exception(f"No files found at {inp_dir}")
-
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- # Load corresponding models architecture and weights
- model = SRMNet()
- model = model.to(device)
- model.eval()
- load_checkpoint(model, args.weights)
-
-
- mul = 16
- for file_ in files:
- img = Image.open(file_).convert('RGB')
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
-
- # Pad the input if its size is not a multiple of 16 (mul)
- h, w = input_.shape[2], input_.shape[3]
- H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
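-        # H and W are h and w rounded up to the next multiple of mul (16); when h or w is
-        # already a multiple, padh/padw below evaluate to 0, so no padding is added.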
- padh = H - h if h % mul != 0 else 0
- padw = W - w if w % mul != 0 else 0
- input_ = F.pad(input_, (0, padw, 0, padh), 'reflect')
- with torch.no_grad():
- restored = model(input_)
-
- restored = torch.clamp(restored, 0, 1)
- restored = restored[:, :, :h, :w]
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
- restored = img_as_ubyte(restored[0])
-
- f = os.path.splitext(os.path.split(file_)[-1])[0]
- save_img((os.path.join(out_dir, f + '.png')), restored)
-
-
-def save_img(filepath, img):
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
-
-
-def load_checkpoint(model, weights):
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
- try:
- model.load_state_dict(checkpoint["state_dict"])
- except RuntimeError:  # state_dict keys carry a 'module.' prefix (checkpoint saved with DataParallel)
- state_dict = checkpoint["state_dict"]
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- name = k[7:] # remove `module.`
- new_state_dict[name] = v
- model.load_state_dict(new_state_dict)
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/7hao/bingo/src/components/chat-message.tsx b/spaces/7hao/bingo/src/components/chat-message.tsx
deleted file mode 100644
index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/components/chat-message.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-import remarkGfm from 'remark-gfm'
-import remarkMath from 'remark-math'
-import supersub from 'remark-supersub'
-import remarkBreaks from 'remark-breaks'
-import { cn } from '@/lib/utils'
-import { CodeBlock } from '@/components/ui/codeblock'
-import { MemoizedReactMarkdown } from '@/components/markdown'
-import { LearnMore } from './learn-more'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { useEffect } from 'react'
-import { TurnCounter } from './turn-counter'
-
-export interface ChatMessageProps {
- message: ChatMessageModel
-}
-
-export function ChatMessage({ message, ...props }: ChatMessageProps) {
- useEffect(() => {
- if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) {
- window.scrollBy(0, 200)
- }
- }, [message.text])
-
- return message.text ? (
-
-
-
-- New interface (modify the LAYOUT option in `config.py` to switch between a ``left-right`` and a ``top-bottom`` layout)
-
-
-
- All buttons are dynamically generated by reading functional.py, so custom functions can be added freely, which frees up the clipboard.
-
-
-
-
-- Error correction / text polishing.
-
-
-
-
-- If the output contains equations, they are displayed both as TeX source and in rendered form, for easy reading and copying.
-
-
-
-
-- Don't feel like reading the project's code? The whole project can be explained directly by ChatGPT.
-
-
-
-
-- Calls a wide variety of large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
-
-
-
-
----
-# Installation
-## Installation-Method 1: running directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure the API key
-
-In `config.py`, configure the API key and other settings. See [Special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program runs, it first checks whether a private configuration file named `config_private.py` exists and uses its values to override the same-named settings in `config.py`. So, if you understand how our configuration is read, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into it. `config_private.py` is not tracked by Git, which keeps your private information safe. P.S. The project also supports setting most options through "environment variables"; the format for writing environment variables follows the `docker-compose` file. Read priority: "environment variables" > `config_private.py` > `config.py`.)
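-
-As a rough illustration of that read priority (a minimal sketch, not the project's actual loader; the helper name `get_conf` and the module lookups are assumptions here):
-```python
-import importlib
-import os
-
-def get_conf(name: str, default=None):
-    """Resolve a setting: environment variable > config_private.py > config.py."""
-    if name in os.environ:                       # 1) environment variables win
-        return os.environ[name]
-    try:
-        private = importlib.import_module("config_private")
-        if hasattr(private, name):               # 2) then the untracked private file
-            return getattr(private, name)
-    except ImportError:
-        pass
-    public = importlib.import_module("config")   # 3) finally the shared config.py
-    return getattr(public, name, default)
-```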
-
-
-3. Install the dependencies
-```sh
-# (Option I: installation for Python users) (Python 3.9 or higher, the newer the better). Note: use the official pip source or the Aliyun pip source. To temporarily switch sources: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: installation for non-Python users) Use Anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create the anaconda env
-conda activate gptac_venv # Activate the anaconda env
-python -m pip install -r requirements.txt # Same step as the pip installation
-```
-
-Click here to expand this section if you want to use THU ChatGLM/FDU MOSS as a backend.
-
-
-【Optional】 If you want to use THU ChatGLM/FDU MOSS as a backend, additional dependencies must be installed (prerequisites: comfortable with Python + have used Pytorch + a sufficiently powerful machine):
-```sh
-# 【Optional Step I】 Support THU ChatGLM. Note on THU ChatGLM: if you hit the error "Call to ChatGLM failed, the ChatGLM parameters cannot be loaded normally", refer to the following: 1: the version installed by default is torch+cpu; to use cuda, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the local machine is not powerful enough, you can change the model precision in request_llm/bridge_chatglm.py by replacing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# 【Optional Step II】 Support FDU MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When running this line of code, you must be in the project root path.
-
-# 【Optional Step III】Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the desired model. Currently, all models supported are as follows (the jittorllms series currently only supports the docker scheme):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-5. Test the function plugins
-```
-- Test function plugin template (asks GPT to answer "what happened in history on this day"); you can use this function as a template for implementing more complex features.
-    Click on "[Function plugin template demo] Today in history"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Download the project
-cd gpt_academic # Enter the project path
-nano config.py # Edit config.py with any text editor and configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923)
-docker build -t gpt-academic . # Install
-
-# (Last step - option 1) In a Linux environment, using `--net=host` is easier and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step - option 2) In a macOS/Windows environment, only the -p option can expose the container's port (e.g. 50923) to the host's port
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml: remove schemes 1 and 3 and keep scheme 2, then modify the configuration of scheme 2 in docker-compose.yml following the comments there.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml: remove schemes 1 and 2 and keep scheme 3, then modify the configuration of scheme 3 in docker-compose.yml following the comments there.
-docker-compose up
-```
-
-
-## Installation - Method 3: Other deployment options
-
-1. How to use a reverse-proxy URL / the Microsoft Azure cloud API
-Simply configure API_URL_REDIRECT following the instructions in config.py (a minimal sketch follows this list).
-
-2. Remote deployment on a cloud server (requires knowledge of and experience with cloud servers)
-Please see the [deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please see the [deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).
-
-4. How to run under a sub-path (such as `http://localhost/subpath`)
-Please see the [FastAPI run instructions](docs/WithFastapi.md).
-
-5. Using docker-compose
-Please read docker-compose.yml and follow the instructions provided there.
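-
-To go with point 1 above, a minimal sketch of what the redirect setting could look like, assuming API_URL_REDIRECT keeps its dict form of official endpoint -> your own endpoint; check config.py in your checkout for the authoritative format, and note the proxy URL below is a placeholder:
-```python
-# config.py (or config_private.py) -- sketch only
-API_URL_REDIRECT = {
-    "https://api.openai.com/v1/chat/completions":
-        "https://your-reverse-proxy.example.com/v1/chat/completions",  # placeholder URL
-}
-```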
-
-# Advanced usage
-## Custom convenience buttons / custom function plugins
-
-1. Custom convenience buttons (academic shortcuts)
-Open core_functional.py with any text editor and add an entry as follows, then restart the program. (If the button was added successfully and is visible, both the prefix and the suffix support hot-editing and take effect without restarting the program.)
-For example
-```
-"Super coller sens": {
- # Préfixe, sera ajouté avant votre entrée. Par exemple, pour décrire votre demande, telle que traduire, expliquer du code, faire la mise en forme, etc.
- "Prefix": "Veuillez traduire le contenu suivant en chinois, puis expliquer chaque terme proprement nommé qui y apparaît avec un tableau markdown:\n\n",
-
- # Suffixe, sera ajouté après votre entrée. Par exemple, en utilisant le préfixe, vous pouvez entourer votre contenu d'entrée de guillemets.
- "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you want, even tasks you have not yet imagined.
-Writing and debugging plugins for this project is easy: as long as you have some basic Python knowledge, you can implement your own plugin by following the template we provide.
-Please see the [function plugin guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.
-
----
-# Latest Update
-
-## New features being rolled out.
-
-1. Conversation saving feature.
-Simply call "Save the current conversation" in the function plugin area to save the current conversation as a readable and restorable HTML file. In addition, calling "Load a conversation history archive" in the function plugin area (drop-down menu) restores a previous conversation. Tip: clicking "Load a conversation history archive" without specifying a file lets you browse the cached HTML archives; click "Delete all local conversation history records" to clear the HTML archive cache.
-
-
-
-
-
-
-
-2. Report generation. Most plugins produce a work report after they finish running.
-
-
-
-
-
-
-3. Modular feature design: a simple interface that still supports powerful functionality.
-
-
-
-
-
-4. This is an open-source project that can "translate itself".
-
-
-
-
-5. Translating other open-source projects is not a problem either.
-
-
-
-
-
-
-
-
-6. Live2D decoration feature (disabled by default; requires modifying config.py).
-
-
-
-
-7. Support for the MOSS large language model.
-
-
-
-
-8. OpenAI image generation.
-
-
-
-
-9. OpenAI speech analysis and synthesis.
-
-
-
-
-10. Whole-document LaTeX error correction.
-
-
-
-
-
-## Versions:
-- version 3.5 (to do): call all of this project's function plugins using natural language (high priority)
-- version 3.4 (to do): improve multi-threading support for locally deployed chatglm
-- version 3.3: built-in internet information retrieval
-- version 3.2: function plugins support more parameter interfaces (conversation saving, reading code in any language + querying any combination of LLMs at the same time)
-- version 3.1: support for querying several GPT models at once! Support for api2d and for load balancing across multiple API keys.
-- version 3.0: support for chatglm and other small LLMs
-- version 2.6: reworked the plugin architecture, improved interactivity, added more plugins
-- version 2.5: self-updating; fixed overly long text and token overflow when summarizing a whole project
-- version 2.4: (1) new full-document PDF translation feature; (2) new feature to swap the input area's position; (3) new vertical layout option; (4) improved multi-threaded function plugins
-- version 2.3: improved multi-threaded interactivity
-- version 2.2: function plugins can now be hot-reloaded
-- version 2.1: collapsible layout
-- version 2.0: introduced modular function plugins
-- version 1.0: basic functionality
-
-gpt_academic developer QQ group-2: 610599535
-
-- Known issues
-  - Some browser translation plugins interfere with this software's frontend
-  - A gradio version that is too high or too low causes various malfunctions
-
-## References and learning
-
-```
-Many other excellent projects were referenced in the code, including:
-
-# Project 1: Tsinghua's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua's JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/cli.py b/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/cli.py
deleted file mode 100644
index 6c1f210a9af206a21bf4ab1e7a6411f0c96a280f..0000000000000000000000000000000000000000
--- a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/serve/cli.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import argparse
-import torch
-
-from mplug_owl2.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
-from mplug_owl2.conversation import conv_templates, SeparatorStyle
-from mplug_owl2.model.builder import load_pretrained_model
-from mplug_owl2.mm_utils import process_images, tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
-
-from PIL import Image
-
-import requests
-from PIL import Image
-from io import BytesIO
-from transformers import TextStreamer
-
-
-def disable_torch_init():
- """
- Disable the redundant torch default initialization to accelerate model creation.
- """
- import torch
- setattr(torch.nn.Linear, "reset_parameters", lambda self: None)
- setattr(torch.nn.LayerNorm, "reset_parameters", lambda self: None)
-
-
-def load_image(image_file):
- if image_file.startswith('http://') or image_file.startswith('https://'):
- response = requests.get(image_file)
- image = Image.open(BytesIO(response.content)).convert('RGB')
- else:
- image = Image.open(image_file).convert('RGB')
- return image
-
-
-def main(args):
- # Model
- disable_torch_init()
-
- model_name = get_model_name_from_path(args.model_path)
- tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
-
- conv_mode = "mplug_owl2"
-
- if args.conv_mode is not None and conv_mode != args.conv_mode:
- print('[WARNING] the auto inferred conversation mode is {}, while `--conv-mode` is {}, using {}'.format(conv_mode, args.conv_mode, args.conv_mode))
- else:
- args.conv_mode = conv_mode
-
- conv = conv_templates[args.conv_mode].copy()
- roles = conv.roles
-
- image = load_image(args.image_file)
- # Similar operation in model_worker.py
- image_tensor = process_images([image], image_processor, args)
- if type(image_tensor) is list:
- image_tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor]
- else:
- image_tensor = image_tensor.to(model.device, dtype=torch.float16)
-
- while True:
- try:
- inp = input(f"{roles[0]}: ")
- except EOFError:
- inp = ""
- if not inp:
- print("exit...")
- break
-
- print(f"{roles[1]}: ", end="")
-
- if image is not None:
- # first message
- inp = DEFAULT_IMAGE_TOKEN + inp
- conv.append_message(conv.roles[0], inp)
- image = None
- else:
- # later messages
- conv.append_message(conv.roles[0], inp)
- conv.append_message(conv.roles[1], None)
- prompt = conv.get_prompt()
-
- input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).to(model.device)
- stop_str = conv.sep if conv.sep_style not in [SeparatorStyle.TWO, SeparatorStyle.TWO_NO_SYS] else conv.sep2
- keywords = [stop_str]
- stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)
- streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
-
- with torch.inference_mode():
- output_ids = model.generate(
- input_ids,
- images=image_tensor,
- do_sample=True,
- temperature=args.temperature,
- max_new_tokens=args.max_new_tokens,
- streamer=streamer,
- use_cache=True,
- stopping_criteria=[stopping_criteria])
-
- outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:]).strip()
- conv.messages[-1][-1] = outputs
-
- if args.debug:
- print("\n", {"prompt": prompt, "outputs": outputs}, "\n")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
- parser.add_argument("--model-base", type=str, default=None)
- parser.add_argument("--image-file", type=str, required=True)
- parser.add_argument("--device", type=str, default="cuda")
- parser.add_argument("--conv-mode", type=str, default=None)
- parser.add_argument("--temperature", type=float, default=0.2)
- parser.add_argument("--max-new-tokens", type=int, default=512)
- parser.add_argument("--load-8bit", action="store_true")
- parser.add_argument("--load-4bit", action="store_true")
- parser.add_argument("--debug", action="store_true")
- parser.add_argument("--image-aspect-ratio", type=str, default='pad')
- args = parser.parse_args()
- main(args)
\ No newline at end of file
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/README.md b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/README.md
deleted file mode 100644
index 3b79d8a133d8df68a4d8f26e0cc66debd3e26881..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/README.md
+++ /dev/null
@@ -1,191 +0,0 @@
-# Make-A-Protagonist
-
-This repository is the official implementation of **Make-A-Protagonist**.
-
-**[Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts](https://arxiv.org/abs/2305.08850)**
-
-[Yuyang Zhao](https://yuyangzhao.com), [Enze Xie](https://xieenze.github.io/), [Lanqing Hong](https://scholar.google.com.sg/citations?user=2p7x6OUAAAAJ&hl=en), [Zhenguo Li](https://scholar.google.com.sg/citations?user=XboZC1AAAAAJ&hl=en), [Gim Hee Lee](https://www.comp.nus.edu.sg/~leegh/)
-
-
-[License](https://opensource.org/licenses/Apache-2.0) [Project Page](https://make-a-protagonist.github.io/) [arXiv](https://arxiv.org/abs/2305.08850)
-
-
-
-
-
-The first framework for generic video editing with both visual and textual clues.
-
-
-
-## Abstract
-> The text-driven image and video diffusion models have achieved unprecedented success in generating realistic and diverse content. Recently, the editing and variation of existing images and videos in diffusion-based generative models have garnered significant attention. However, previous works are limited to editing content with text or providing coarse personalization using a single visual clue, rendering them unsuitable for indescribable content that requires fine-grained and detailed control. In this regard, we propose a generic video editing framework called Make-A-Protagonist, which utilizes textual and visual clues to edit videos with the goal of empowering individuals to become the protagonists. Specifically, we leverage multiple experts to parse source video, target visual and textual clues, and propose a visual-textual-based video generation model that employs mask-guided denoising sampling to generate the desired output. Extensive results demonstrate the versatile and remarkable editing capabilities of Make-A-Protagonist.
-
-## News
-- [16/05/2023] Code released!
-
-### Todo
-- [ ] Release training code for ControlNet UnCLIP Small
-- [ ] Release inference demo
-
-
-## Setup
-
-### Requirements
-- Python 3.9 and Pytorch 1.13.1
-- xformers 0.0.17
-- Other packages in `requirements.txt`
-- Build GroundedSAM expert
-```bash
-cd experts/GroundedSAM
-python -m pip install -e GroundingDINO
-python -m pip install -e segment_anything
-```
-
-### Weights
-
-The following weights from HuggingFace are used in this project. You can download them into `checkpoints` or load them from HuggingFace repo.
-- [Stable Diffusion UnCLIP Small](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip-small)
-- [BLIP-2 Flan T5-xL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)
-- [CLIP ViT-L](https://huggingface.co/openai/clip-vit-large-patch14)
-- [DALL-E 2 Prior](https://huggingface.co/kakaobrain/karlo-v1-alpha)
-
-ControlNet for Stable Diffusion UnCLIP Small should be downloaded manually into `checkpoints`:
-- [ControlNet UnCLIP Small](https://huggingface.co/Make-A-Protagonist/Make-A-Protagonist/tree/main)
-
-The code for training these models will be released soon.
-
-Pre-trained model for other experts should be downloaded manually into `checkpoints`:
-- [GroundingDINO](https://github.com/IDEA-Research/GroundingDINO) `wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth`
-- [Segment Anything](https://github.com/facebookresearch/segment-anything) `wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth`
-- [XMem](https://github.com/hkchengrex/XMem) `wget https://github.com/hkchengrex/XMem/releases/download/v1.0/XMem.pth`
-
-
-
-## Usage
-
-### Data Preprocess
-
-#### Source Video Parsing
-
-**Captioning and VQA**:
-```bash
-python experts/blip_inference.py -d data//images
-```
-
-**Protagonist Segmentation**:
-
-- Frame segmentation with GroundedSAM
-```bash
-python experts/grounded_sam_inference.py -d data//images/0000.jpg -t
-```
-
-- Video object segmentation through the video
-```bash
-python experts/xmem_inference.py -d data//images -v --mask_dir .mask
-```
-
-**Control Signals Extraction**:
-```bash
-python experts/controlnet_signal_extraction.py -d data//images -c
-```
-Currently we only support two types of control signals: depth and openposefull.
-
-#### Visual Clue Parsing
-
-**Reference Protagonist Segmentation**:
-```bash
-python experts/grounded_sam_inference.py -d data//reference_images -t --masked_out
-```
-
-### Training
-
-To fine-tune the text-to-image diffusion models with visual and textual clues, run this command:
-
-```bash
-python train.py --config="configs//train.yaml"
-```
-
-Note: At least 24 GB of GPU memory is required to train the model.
-
-### Inference
-
-Once the training is done, run inference:
-
-```bash
-python eval.py --config="configs//eval.yaml"
-```
-**Applications**: Three applications are supported by Make-A-Protagonist; each can be selected by modifying the inference configuration file (see the sketch after this list).
-- Protagonist Editing: `source_protagonist: true`
-- Background Editing: `source_background: true`
-- Text-to-Video Editing with Protagonist: `source_protagonist: false & source_background: false`
-
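-A rough sketch of switching between the three modes above by editing the eval config. The config path and the top-level placement of the two flags are assumptions for illustration; check your own eval.yaml for the real layout:
-```python
-import yaml
-
-CONFIG_PATH = "configs/example/eval.yaml"  # hypothetical path; point this at your config
-
-with open(CONFIG_PATH) as f:
-    cfg = yaml.safe_load(f)
-
-# Pick one of the three modes from the list above, e.g. Protagonist Editing:
-cfg["source_protagonist"] = True
-# (Background Editing would set cfg["source_background"] = True instead;
-#  Text-to-Video Editing with Protagonist sets both flags to False.)
-
-with open(CONFIG_PATH, "w") as f:
-    yaml.safe_dump(cfg, f)
-```
-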
-## Results
-
-
-
-
Input Video
-
Reference Image
-
Generated Video
-
-
-
-
-
-
-
-
"A man walking down the street"
-
-
"A panda walking down the snowy street"
-
-
-
-
-
-
-
-
-
"A man playing basketball"
-
-
"A man playing basketball on the beach, anime style"
-
-
-
-
-
-
-
-
-
"A man walking down the street"
-
-
"Elon Musk walking down the street"
-
-
-
-
-
-
-
-
-
"A Suzuki Jimny driving down a mountain road"
-
-
"A Suzuki Jimny driving down a mountain road in the rain"
-
-
-
-
-
-
-## Citation
-If you make use of our work, please cite our paper.
-```bibtex
-@article{zhao2023makeaprotagonist,
- title={Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts},
- author={Zhao, Yuyang and Xie, Enze and Hong, Lanqing and Li, Zhenguo and Lee, Gim Hee},
- journal={arXiv preprint arXiv:2305.08850},
- year={2023}
-}
-```
-
-## Acknowledgements
-
-This code is heavily derived from [diffusers](https://github.com/huggingface/diffusers) and [Tune-A-Video](https://github.com/showlab/Tune-A-Video). If you use this code in your research, please also acknowledge their work.
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py
deleted file mode 100644
index 052df6220595a1b39b7e2aea37ca4872d113dfd2..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py
+++ /dev/null
@@ -1,395 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR model and criterion classes.
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# ------------------------------------------------------------------------
-import copy
-from typing import List
-
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torchvision.ops.boxes import nms
-from transformers import AutoTokenizer, BertModel, BertTokenizer, RobertaModel, RobertaTokenizerFast
-
-from groundingdino.util import box_ops, get_tokenlizer
-from groundingdino.util.misc import (
- NestedTensor,
- accuracy,
- get_world_size,
- interpolate,
- inverse_sigmoid,
- is_dist_avail_and_initialized,
- nested_tensor_from_tensor_list,
-)
-from groundingdino.util.utils import get_phrases_from_posmap
-from groundingdino.util.visualizer import COCOVisualizer
-from groundingdino.util.vl_utils import create_positive_map_from_span
-
-from ..registry import MODULE_BUILD_FUNCS
-from .backbone import build_backbone
-from .bertwarper import (
- BertModelWarper,
- generate_masks_with_special_tokens,
- generate_masks_with_special_tokens_and_transfer_map,
-)
-from .transformer import build_transformer
-from .utils import MLP, ContrastiveEmbed, sigmoid_focal_loss
-
-
-class GroundingDINO(nn.Module):
- """This is the Cross-Attention Detector module that performs object detection"""
-
- def __init__(
- self,
- backbone,
- transformer,
- num_queries,
- aux_loss=False,
- iter_update=False,
- query_dim=2,
- num_feature_levels=1,
- nheads=8,
- # two stage
- two_stage_type="no", # ['no', 'standard']
- dec_pred_bbox_embed_share=True,
- two_stage_class_embed_share=True,
- two_stage_bbox_embed_share=True,
- num_patterns=0,
- dn_number=100,
- dn_box_noise_scale=0.4,
- dn_label_noise_ratio=0.5,
- dn_labelbook_size=100,
- text_encoder_type="bert-base-uncased",
- sub_sentence_present=True,
- max_text_len=256,
- ):
- """Initializes the model.
- Parameters:
- backbone: torch module of the backbone to be used. See backbone.py
- transformer: torch module of the transformer architecture. See transformer.py
- num_queries: number of object queries, ie detection slot. This is the maximal number of objects
- Conditional DETR can detect in a single image. For COCO, we recommend 100 queries.
- aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.
- """
- super().__init__()
- self.num_queries = num_queries
- self.transformer = transformer
- self.hidden_dim = hidden_dim = transformer.d_model
- self.num_feature_levels = num_feature_levels
- self.nheads = nheads
- self.max_text_len = 256
- self.sub_sentence_present = sub_sentence_present
-
- # setting query dim
- self.query_dim = query_dim
- assert query_dim == 4
-
- # for dn training
- self.num_patterns = num_patterns
- self.dn_number = dn_number
- self.dn_box_noise_scale = dn_box_noise_scale
- self.dn_label_noise_ratio = dn_label_noise_ratio
- self.dn_labelbook_size = dn_labelbook_size
-
- # bert
- self.tokenizer = get_tokenlizer.get_tokenlizer(text_encoder_type)
- self.bert = get_tokenlizer.get_pretrained_language_model(text_encoder_type)
- self.bert.pooler.dense.weight.requires_grad_(False)
- self.bert.pooler.dense.bias.requires_grad_(False)
- self.bert = BertModelWarper(bert_model=self.bert)
-
- self.feat_map = nn.Linear(self.bert.config.hidden_size, self.hidden_dim, bias=True)
- nn.init.constant_(self.feat_map.bias.data, 0)
- nn.init.xavier_uniform_(self.feat_map.weight.data)
- # freeze
-
- # special tokens
- self.specical_tokens = self.tokenizer.convert_tokens_to_ids(["[CLS]", "[SEP]", ".", "?"])
-
- # prepare input projection layers
- if num_feature_levels > 1:
- num_backbone_outs = len(backbone.num_channels)
- input_proj_list = []
- for _ in range(num_backbone_outs):
- in_channels = backbone.num_channels[_]
- input_proj_list.append(
- nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- )
- )
- for _ in range(num_feature_levels - num_backbone_outs):
- input_proj_list.append(
- nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(32, hidden_dim),
- )
- )
- in_channels = hidden_dim
- self.input_proj = nn.ModuleList(input_proj_list)
- else:
- assert two_stage_type == "no", "two_stage_type should be no if num_feature_levels=1 !!!"
- self.input_proj = nn.ModuleList(
- [
- nn.Sequential(
- nn.Conv2d(backbone.num_channels[-1], hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- )
- ]
- )
-
- self.backbone = backbone
- self.aux_loss = aux_loss
- self.box_pred_damping = box_pred_damping = None
-
- self.iter_update = iter_update
- assert iter_update, "Why not iter_update?"
-
- # prepare pred layers
- self.dec_pred_bbox_embed_share = dec_pred_bbox_embed_share
- # prepare class & box embed
- _class_embed = ContrastiveEmbed()
-
- _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
- nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0)
- nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0)
-
- if dec_pred_bbox_embed_share:
- box_embed_layerlist = [_bbox_embed for i in range(transformer.num_decoder_layers)]
- else:
- box_embed_layerlist = [
- copy.deepcopy(_bbox_embed) for i in range(transformer.num_decoder_layers)
- ]
- class_embed_layerlist = [_class_embed for i in range(transformer.num_decoder_layers)]
- self.bbox_embed = nn.ModuleList(box_embed_layerlist)
- self.class_embed = nn.ModuleList(class_embed_layerlist)
- self.transformer.decoder.bbox_embed = self.bbox_embed
- self.transformer.decoder.class_embed = self.class_embed
-
- # two stage
- self.two_stage_type = two_stage_type
- assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format(
- two_stage_type
- )
- if two_stage_type != "no":
- if two_stage_bbox_embed_share:
- assert dec_pred_bbox_embed_share
- self.transformer.enc_out_bbox_embed = _bbox_embed
- else:
- self.transformer.enc_out_bbox_embed = copy.deepcopy(_bbox_embed)
-
- if two_stage_class_embed_share:
- assert dec_pred_bbox_embed_share
- self.transformer.enc_out_class_embed = _class_embed
- else:
- self.transformer.enc_out_class_embed = copy.deepcopy(_class_embed)
-
- self.refpoint_embed = None
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- # init input_proj
- for proj in self.input_proj:
- nn.init.xavier_uniform_(proj[0].weight, gain=1)
- nn.init.constant_(proj[0].bias, 0)
-
- def init_ref_points(self, use_num_queries):
- self.refpoint_embed = nn.Embedding(use_num_queries, self.query_dim)
-
- def forward(self, samples: NestedTensor, targets: List = None, **kw):
- """The forward expects a NestedTensor, which consists of:
- - samples.tensor: batched images, of shape [batch_size x 3 x H x W]
- - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
-
- It returns a dict with the following elements:
- - "pred_logits": the classification logits (including no-object) for all queries.
- Shape= [batch_size x num_queries x num_classes]
- - "pred_boxes": The normalized boxes coordinates for all queries, represented as
- (center_x, center_y, width, height). These values are normalized in [0, 1],
- relative to the size of each individual image (disregarding possible padding).
- See PostProcess for information on how to retrieve the unnormalized bounding box.
- - "aux_outputs": Optional, only returned when auxilary losses are activated. It is a list of
- dictionnaries containing the two above keys for each decoder layer.
- """
- if targets is None:
- captions = kw["captions"]
- else:
- captions = [t["caption"] for t in targets]
- len(captions)
-
- # encoder texts
- tokenized = self.tokenizer(captions, padding="longest", return_tensors="pt").to(
- samples.device
- )
- (
- text_self_attention_masks,
- position_ids,
- cate_to_token_mask_list,
- ) = generate_masks_with_special_tokens_and_transfer_map(
- tokenized, self.specical_tokens, self.tokenizer
- )
-
- if text_self_attention_masks.shape[1] > self.max_text_len:
- text_self_attention_masks = text_self_attention_masks[
- :, : self.max_text_len, : self.max_text_len
- ]
- position_ids = position_ids[:, : self.max_text_len]
- tokenized["input_ids"] = tokenized["input_ids"][:, : self.max_text_len]
- tokenized["attention_mask"] = tokenized["attention_mask"][:, : self.max_text_len]
- tokenized["token_type_ids"] = tokenized["token_type_ids"][:, : self.max_text_len]
-
- # extract text embeddings
- if self.sub_sentence_present:
- tokenized_for_encoder = {k: v for k, v in tokenized.items() if k != "attention_mask"}
- tokenized_for_encoder["attention_mask"] = text_self_attention_masks
- tokenized_for_encoder["position_ids"] = position_ids
- else:
- # import ipdb; ipdb.set_trace()
- tokenized_for_encoder = tokenized
-
- bert_output = self.bert(**tokenized_for_encoder) # bs, 195, 768
-
- encoded_text = self.feat_map(bert_output["last_hidden_state"]) # bs, 195, d_model
- text_token_mask = tokenized.attention_mask.bool() # bs, 195
- # text_token_mask: True for nomask, False for mask
- # text_self_attention_masks: True for nomask, False for mask
-
- if encoded_text.shape[1] > self.max_text_len:
- encoded_text = encoded_text[:, : self.max_text_len, :]
- text_token_mask = text_token_mask[:, : self.max_text_len]
- position_ids = position_ids[:, : self.max_text_len]
- text_self_attention_masks = text_self_attention_masks[
- :, : self.max_text_len, : self.max_text_len
- ]
-
- text_dict = {
- "encoded_text": encoded_text, # bs, 195, d_model
- "text_token_mask": text_token_mask, # bs, 195
- "position_ids": position_ids, # bs, 195
- "text_self_attention_masks": text_self_attention_masks, # bs, 195,195
- }
-
- # import ipdb; ipdb.set_trace()
-
- if isinstance(samples, (list, torch.Tensor)):
- samples = nested_tensor_from_tensor_list(samples)
- features, poss = self.backbone(samples)
-
- srcs = []
- masks = []
- for l, feat in enumerate(features):
- src, mask = feat.decompose()
- srcs.append(self.input_proj[l](src))
- masks.append(mask)
- assert mask is not None
- if self.num_feature_levels > len(srcs):
- _len_srcs = len(srcs)
- for l in range(_len_srcs, self.num_feature_levels):
- if l == _len_srcs:
- src = self.input_proj[l](features[-1].tensors)
- else:
- src = self.input_proj[l](srcs[-1])
- m = samples.mask
- mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]
- pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)
- srcs.append(src)
- masks.append(mask)
- poss.append(pos_l)
-
- input_query_bbox = input_query_label = attn_mask = dn_meta = None
- hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
- srcs, masks, input_query_bbox, poss, input_query_label, attn_mask, text_dict
- )
-
- # deformable-detr-like anchor update
- outputs_coord_list = []
- for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate(
- zip(reference[:-1], self.bbox_embed, hs)
- ):
- layer_delta_unsig = layer_bbox_embed(layer_hs)
- layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig)
- layer_outputs_unsig = layer_outputs_unsig.sigmoid()
- outputs_coord_list.append(layer_outputs_unsig)
- outputs_coord_list = torch.stack(outputs_coord_list)
-
- # output
- outputs_class = torch.stack(
- [
- layer_cls_embed(layer_hs, text_dict)
- for layer_cls_embed, layer_hs in zip(self.class_embed, hs)
- ]
- )
- out = {"pred_logits": outputs_class[-1], "pred_boxes": outputs_coord_list[-1]}
-
- # # for intermediate outputs
- # if self.aux_loss:
- # out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord_list)
-
- # # for encoder output
- # if hs_enc is not None:
- # # prepare intermediate outputs
- # interm_coord = ref_enc[-1]
- # interm_class = self.transformer.enc_out_class_embed(hs_enc[-1], text_dict)
- # out['interm_outputs'] = {'pred_logits': interm_class, 'pred_boxes': interm_coord}
- # out['interm_outputs_for_matching_pre'] = {'pred_logits': interm_class, 'pred_boxes': init_box_proposal}
-
- return out
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_coord):
- # this is a workaround to make torchscript happy, as torchscript
- # doesn't support dictionary with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- return [
- {"pred_logits": a, "pred_boxes": b}
- for a, b in zip(outputs_class[:-1], outputs_coord[:-1])
- ]
-
-
-@MODULE_BUILD_FUNCS.registe_with_name(module_name="groundingdino")
-def build_groundingdino(args):
-
- backbone = build_backbone(args)
- transformer = build_transformer(args)
-
- dn_labelbook_size = args.dn_labelbook_size
- dec_pred_bbox_embed_share = args.dec_pred_bbox_embed_share
- sub_sentence_present = args.sub_sentence_present
-
- model = GroundingDINO(
- backbone,
- transformer,
- num_queries=args.num_queries,
- aux_loss=True,
- iter_update=True,
- query_dim=4,
- num_feature_levels=args.num_feature_levels,
- nheads=args.nheads,
- dec_pred_bbox_embed_share=dec_pred_bbox_embed_share,
- two_stage_type=args.two_stage_type,
- two_stage_bbox_embed_share=args.two_stage_bbox_embed_share,
- two_stage_class_embed_share=args.two_stage_class_embed_share,
- num_patterns=args.num_patterns,
- dn_number=0,
- dn_box_noise_scale=args.dn_box_noise_scale,
- dn_label_noise_ratio=args.dn_label_noise_ratio,
- dn_labelbook_size=dn_labelbook_size,
- text_encoder_type=args.text_encoder_type,
- sub_sentence_present=sub_sentence_present,
- max_text_len=args.max_text_len,
- )
-
- return model
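
For orientation, here is a minimal, hypothetical inference sketch for the model deleted above. `args`, the random image tensor, the caption, and the 0.3 confidence threshold are placeholders rather than values taken from this repository; only `build_groundingdino`, `nested_tensor_from_tensor_list`, and the `captions=` keyword come from the code itself.

```python
# Hypothetical usage sketch; args, the image, the caption, and the threshold
# are placeholders. Shapes follow the docstring of GroundingDINO.forward.
import torch

model = build_groundingdino(args)  # args would come from a GroundingDINO config
model.eval()

image = torch.randn(3, 800, 1200)                  # one RGB image (C, H, W)
samples = nested_tensor_from_tensor_list([image])  # batch + pad into a NestedTensor

with torch.no_grad():
    out = model(samples, captions=["a cat . a remote control ."])

logits = out["pred_logits"].sigmoid()      # (1, num_queries, max_text_len)
boxes = out["pred_boxes"]                  # (1, num_queries, 4), normalized cxcywh
keep = logits.max(dim=-1).values[0] > 0.3  # assumed confidence threshold
print(boxes[0][keep])
```

Mapping the kept queries back to caption phrases would additionally use the tokenized caption, which this sketch omits.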
diff --git a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_preprocessing.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_preprocessing.py
deleted file mode 100644
index 92d37056e644f889ac4ecc7e590cd49120012802..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_preprocessing.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The Google AI Perception Team Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Process frame-by-frame keypoints detection results to pkl."""
-import glob
-import json
-import multiprocessing
-import os
-import pickle
-
-from absl import app
-from absl import flags
-from absl import logging
-from aist_plusplus.loader import AISTDataset
-import numpy as np
-
-FLAGS = flags.FLAGS
-flags.DEFINE_string(
- 'keypoints_dir',
- '/usr/local/google/home/ruilongli/data/AIST_plusplus_v4/posenet_2stage_pose_10M_60fps_all/',
-    'input local directory that stores 2D keypoint detection results in json.'
-)
-flags.DEFINE_string(
- 'save_dir',
- '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/keypoints2d/',
-    'output local directory that stores 2D keypoint detection results in pkl.'
-)
-np.random.seed(0)
-
-
-def array_nan(shape, dtype=np.float32):
- array = np.empty(shape, dtype=dtype)
- array[:] = np.nan
- return array
-
-
-def load_keypoints2d_file(file_path, njoints=17):
- """load 2D keypoints from keypoint detection results.
-
- Only one person is extracted from the results. If there are multiple
- persons in the prediction results, we select the one with the highest
- detection score.
-
- Args:
- file_path: the json file path.
-    njoints: number of joints in the keypoint definition.
-
- Returns:
- A `np.array` with the shape of [njoints, 3].
- """
- keypoint = array_nan((njoints, 3), dtype=np.float32)
- det_score = 0.0
-
- try:
- with open(file_path, 'r') as f:
- data = json.load(f)
- except Exception as e: # pylint: disable=broad-except
- logging.warning(e)
- return keypoint, det_score
-
- det_scores = np.array(data['detection_scores'])
- keypoints = np.array(data['keypoints']).reshape((-1, njoints, 3))
-
- # The detection results may contain zero person or multiple people.
- if det_scores.shape[0] == 0:
- # There is no person in this image. We set NaN to this frame.
- return keypoint, det_score
- else:
- # There are multiple people (>=1) in this image. We select the one with
- # the highest detection score.
- idx = np.argmax(det_scores)
- keypoint = keypoints[idx]
- det_score = det_scores[idx]
- return keypoint, det_score
-
-
-def load_keypoints2d(data_dir, seq_name, njoints=17):
- """Load 2D keypoints predictions for a set of multi-view videos."""
- # Parsing sequence name to multi-view video names
- video_names = [AISTDataset.get_video_name(seq_name, view)
- for view in AISTDataset.VIEWS]
-
- # In case frames are missing, we first scan all views to get a union
- # of timestamps.
- paths_cache = {}
- timestamps = []
- for video_name in video_names:
- paths = sorted(glob.glob(os.path.join(data_dir, video_name, '*.json')))
- paths_cache[video_name] = paths
- timestamps += [int(p.split('.')[0].split('_')[-1]) for p in paths]
- timestamps = np.array(sorted(list(set(timestamps)))) # (N,)
-
- # Then we load all frames according to timestamps.
- keypoints2d = []
- det_scores = []
- for video_name in video_names:
- paths = [
- os.path.join(data_dir, video_name, f'{video_name}_{ts}.json')
- for ts in timestamps
- ]
- keypoints2d_per_view = []
- det_scores_per_view = []
- for path in paths:
- keypoint, det_score = load_keypoints2d_file(path, njoints=njoints)
- keypoints2d_per_view.append(keypoint)
- det_scores_per_view.append(det_score)
- keypoints2d.append(keypoints2d_per_view)
- det_scores.append(det_scores_per_view)
-
- keypoints2d = np.array(
- keypoints2d, dtype=np.float32) # (nviews, N, njoints, 3)
- det_scores = np.array(
- det_scores, dtype=np.float32) # (nviews, N)
- return keypoints2d, det_scores, timestamps
-
-
-def process_and_save(seq_name):
- keypoints2d, det_scores, timestamps = load_keypoints2d(
- FLAGS.keypoints_dir, seq_name=seq_name, njoints=17)
- os.makedirs(FLAGS.save_dir, exist_ok=True)
- save_path = os.path.join(FLAGS.save_dir, f'{seq_name}.pkl')
- with open(save_path, 'wb') as f:
- pickle.dump({
- 'keypoints2d': keypoints2d,
- 'det_scores': det_scores,
- 'timestamps': timestamps,
- }, f, protocol=pickle.HIGHEST_PROTOCOL)
-
-
-def main(_):
- video_names = os.listdir(FLAGS.keypoints_dir)
- video_names = [
- video_name for video_name in video_names
- if len(video_name.split('_')) == 6
- ]
- seq_names = list(set([
- AISTDataset.get_seq_name(video_name)[0] for video_name in video_names]))
-
- pool = multiprocessing.Pool(16)
- pool.map(process_and_save, seq_names)
-
-
-if __name__ == '__main__':
- app.run(main)
-
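
The script above never shows the layout of the per-frame JSON it consumes. The sketch below is illustrative only: the file name and values are made up, and the two keys are simply the ones `load_keypoints2d_file` reads.

```python
# Illustrative per-frame JSON for load_keypoints2d_file; the file name and
# values are placeholders, the keys mirror the ones read in the function above.
import json
import numpy as np

frame = {
    "detection_scores": [0.91, 0.42],            # one score per detected person
    "keypoints": np.zeros((2, 17, 3)).tolist(),  # reshaped to (-1, njoints, 3) on load
}
with open("frame_000001.json", "w") as f:
    json.dump(frame, f)

keypoint, det_score = load_keypoints2d_file("frame_000001.json", njoints=17)
print(keypoint.shape, det_score)                 # (17, 3) 0.91 -> person 0 is kept
```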
diff --git "a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/pages/4_\360\237\223\226_Readme.py" "b/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/pages/4_\360\237\223\226_Readme.py"
deleted file mode 100644
index 6cb04afd8eef78e28f6b6f57d305f6608f096268..0000000000000000000000000000000000000000
--- "a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/pages/4_\360\237\223\226_Readme.py"
+++ /dev/null
@@ -1,38 +0,0 @@
-import streamlit as st
-
-st.title("White-box Style Transfer Editing")
-
-print(st.session_state["user"], " opened readme")
-st.markdown("""
- This app demonstrates the editing capabilities of the White-box Style Transfer Editing (WISE) framework.
- It optimizes the parameters of classical image processing filters to match a given style image.
-
- ### How does it work?
-    We provide a small stylization effect that contains several filters, such as bump mapping or edge enhancement, that can be optimized. The optimization yields so-called parameter masks, which contain per-pixel parameter settings for each filter.
-
- ### Global Editing
-    - On the first page, select existing content/style combinations or upload images to optimize, which takes ~5 min.
-    - After the effect has been applied, use the parameter sliders to adjust a parameter value globally.
-
- ### Local Editing
-    - On the "apply preset" page, we define several parameter presets that can be drawn on the image. Press "Apply" to make the changes permanent.
-    - On the "local editing" page, individual parameter masks can be edited regionally. Choose the parameter in the left sidebar, and use the parameter strength slider to either increase or decrease the strength of the drawn strokes.
- - Strokes on the drawing canvas (left column) are updated in real-time on the result in the right column.
- - Strokes stay on the canvas unless manually deleted by clicking the trash button. To remove them from the canvas after each stroke, tick the corresponding checkbox in the sidebar.
-
- ### xDoG Prediction
-    - Demonstrates parameter prediction networks for line drawings using extended difference-of-Gaussians (xDoG), trained on the APDrawing dataset.
-    - The effect pipeline uses a post-processing CNN to stylize features that cannot be captured by xDoG alone.
-    - To see the xDoG output without post-processing, tick the corresponding checkbox. Control the global parameters of xDoG using the sliders.
-
- ### Links & Paper
- **[Project page](https://ivpg.hpi3d.de/wise/),
- [arxiv link](https://arxiv.org/abs/2207.14606),
- [demo code](https://github.com/MaxReimann/WISE-Editing)**
-
- "WISE: Whitebox Image Stylization by Example-based Learning", by Winfried Lötzsch*, Max Reimann*, Martin Büßemeyer, Amir Semmo, Jürgen Döllner, Matthias Trapp, in ECCV 2022
-
- ### Further notes
- Pull Requests and further improvements are very welcome.
-    Please note that the shown effect is a minimal pipeline in terms of stylization capability; the much more feature-rich oilpaint and watercolor pipelines we show in our ECCV paper cannot be open-sourced due to IP reasons.
-""")
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/file_client.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/file_client.py
deleted file mode 100644
index 950f0c1aeab14b8e308a7455ccd64a95b5d98add..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/file_client.py
+++ /dev/null
@@ -1,1148 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import inspect
-import os
-import os.path as osp
-import re
-import tempfile
-import warnings
-from abc import ABCMeta, abstractmethod
-from contextlib import contextmanager
-from pathlib import Path
-from typing import Iterable, Iterator, Optional, Tuple, Union
-from urllib.request import urlopen
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.utils.misc import has_method
-from annotator.uniformer.mmcv.utils.path import is_filepath
-
-
-class BaseStorageBackend(metaclass=ABCMeta):
- """Abstract class of storage backends.
-
- All backends need to implement two apis: ``get()`` and ``get_text()``.
- ``get()`` reads the file as a byte stream and ``get_text()`` reads the file
- as texts.
- """
-
- # a flag to indicate whether the backend can create a symlink for a file
- _allow_symlink = False
-
- @property
- def name(self):
- return self.__class__.__name__
-
- @property
- def allow_symlink(self):
- return self._allow_symlink
-
- @abstractmethod
- def get(self, filepath):
- pass
-
- @abstractmethod
- def get_text(self, filepath):
- pass
-
-
-class CephBackend(BaseStorageBackend):
- """Ceph storage backend (for internal use).
-
- Args:
- path_mapping (dict|None): path mapping dict from local path to Petrel
- path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath``
- will be replaced by ``dst``. Default: None.
-
- .. warning::
- :class:`mmcv.fileio.file_client.CephBackend` will be deprecated,
- please use :class:`mmcv.fileio.file_client.PetrelBackend` instead.
- """
-
- def __init__(self, path_mapping=None):
- try:
- import ceph
- except ImportError:
- raise ImportError('Please install ceph to enable CephBackend.')
-
- warnings.warn(
- 'CephBackend will be deprecated, please use PetrelBackend instead')
- self._client = ceph.S3Client()
- assert isinstance(path_mapping, dict) or path_mapping is None
- self.path_mapping = path_mapping
-
- def get(self, filepath):
- filepath = str(filepath)
- if self.path_mapping is not None:
- for k, v in self.path_mapping.items():
- filepath = filepath.replace(k, v)
- value = self._client.Get(filepath)
- value_buf = memoryview(value)
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class PetrelBackend(BaseStorageBackend):
- """Petrel storage backend (for internal use).
-
- PetrelBackend supports reading and writing data to multiple clusters.
- If the file path contains the cluster name, PetrelBackend will read data
- from specified cluster or write data to it. Otherwise, PetrelBackend will
- access the default cluster.
-
- Args:
- path_mapping (dict, optional): Path mapping dict from local path to
- Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in
- ``filepath`` will be replaced by ``dst``. Default: None.
- enable_mc (bool, optional): Whether to enable memcached support.
- Default: True.
-
- Examples:
- >>> filepath1 = 's3://path/of/file'
- >>> filepath2 = 'cluster-name:s3://path/of/file'
- >>> client = PetrelBackend()
- >>> client.get(filepath1) # get data from default cluster
- >>> client.get(filepath2) # get data from 'cluster-name' cluster
- """
-
- def __init__(self,
- path_mapping: Optional[dict] = None,
- enable_mc: bool = True):
- try:
- from petrel_client import client
- except ImportError:
- raise ImportError('Please install petrel_client to enable '
- 'PetrelBackend.')
-
- self._client = client.Client(enable_mc=enable_mc)
- assert isinstance(path_mapping, dict) or path_mapping is None
- self.path_mapping = path_mapping
-
- def _map_path(self, filepath: Union[str, Path]) -> str:
- """Map ``filepath`` to a string path whose prefix will be replaced by
- :attr:`self.path_mapping`.
-
- Args:
- filepath (str): Path to be mapped.
- """
- filepath = str(filepath)
- if self.path_mapping is not None:
- for k, v in self.path_mapping.items():
- filepath = filepath.replace(k, v)
- return filepath
-
- def _format_path(self, filepath: str) -> str:
- """Convert a ``filepath`` to standard format of petrel oss.
-
- If the ``filepath`` is concatenated by ``os.path.join``, in a Windows
- environment, the ``filepath`` will be the format of
- 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the
- above ``filepath`` will be converted to 's3://bucket_name/image.jpg'.
-
- Args:
- filepath (str): Path to be formatted.
- """
- return re.sub(r'\\+', '/', filepath)
-
- def get(self, filepath: Union[str, Path]) -> memoryview:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- memoryview: A memory view of expected bytes object to avoid
- copying. The memoryview object can be converted to bytes by
- ``value_buf.tobytes()``.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- value = self._client.Get(filepath)
- value_buf = memoryview(value)
- return value_buf
-
- def get_text(self,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- return str(self.get(filepath), encoding=encoding)
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Save data to a given ``filepath``.
-
- Args:
- obj (bytes): Data to be saved.
- filepath (str or Path): Path to write data.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- self._client.put(filepath, obj)
-
- def put_text(self,
- obj: str,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> None:
- """Save data to a given ``filepath``.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str): The encoding format used to encode the ``obj``.
- Default: 'utf-8'.
- """
- self.put(bytes(obj, encoding=encoding), filepath)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str or Path): Path to be removed.
- """
- if not has_method(self._client, 'delete'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `delete` method, please use a higher version or dev'
- ' branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- self._client.delete(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- if not (has_method(self._client, 'contains')
- and has_method(self._client, 'isdir')):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `contains` and `isdir` methods, please use a higher'
- 'version or dev branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.contains(filepath) or self._client.isdir(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- if not has_method(self._client, 'isdir'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `isdir` method, please use a higher version or dev'
- ' branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- if not has_method(self._client, 'contains'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `contains` method, please use a higher version or '
- 'dev branch instead.'))
-
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- return self._client.contains(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result after concatenation.
- """
- filepath = self._format_path(self._map_path(filepath))
- if filepath.endswith('/'):
- filepath = filepath[:-1]
- formatted_paths = [filepath]
- for path in filepaths:
- formatted_paths.append(self._format_path(self._map_path(path)))
- return '/'.join(formatted_paths)
-
- @contextmanager
- def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]:
- """Download a file from ``filepath`` and return a temporary path.
-
-        ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
-        can be called with a ``with`` statement, and when exiting from the
-        ``with`` statement, the temporary path will be released.
-
- Args:
- filepath (str | Path): Download a file from ``filepath``.
-
- Examples:
- >>> client = PetrelBackend()
-            >>> # After exiting from the ``with`` clause,
- >>> # the path will be removed
- >>> with client.get_local_path('s3://path/of/your/file') as path:
- ... # do something here
-
- Yields:
- Iterable[str]: Only yield one temporary path.
- """
- filepath = self._map_path(filepath)
- filepath = self._format_path(filepath)
- assert self.isfile(filepath)
- try:
- f = tempfile.NamedTemporaryFile(delete=False)
- f.write(self.get(filepath))
- f.close()
- yield f.name
- finally:
- os.remove(f.name)
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- Petrel has no concept of directories but it simulates the directory
- hierarchy in the filesystem through public prefixes. In addition,
- if the returned path ends with '/', it means the path is a public
- prefix which is a logical directory.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
-            In addition, the returned path of a directory will not contain the
-            suffix '/', which is consistent with other backends.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- if not has_method(self._client, 'list'):
- raise NotImplementedError(
- ('Current version of Petrel Python SDK has not supported '
- 'the `list` method, please use a higher version or dev'
- ' branch instead.'))
-
- dir_path = self._map_path(dir_path)
- dir_path = self._format_path(dir_path)
- if list_dir and suffix is not None:
- raise TypeError(
- '`list_dir` should be False when `suffix` is not None')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('`suffix` must be a string or tuple of strings')
-
- # Petrel's simulated directory hierarchy assumes that directory paths
- # should end with `/`
- if not dir_path.endswith('/'):
- dir_path += '/'
-
- root = dir_path
-
- def _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive):
- for path in self._client.list(dir_path):
- # the `self.isdir` is not used here to determine whether path
- # is a directory, because `self.isdir` relies on
- # `self._client.list`
- if path.endswith('/'): # a directory path
- next_dir_path = self.join_path(dir_path, path)
- if list_dir:
- # get the relative path and exclude the last
- # character '/'
- rel_dir = next_dir_path[len(root):-1]
- yield rel_dir
- if recursive:
- yield from _list_dir_or_file(next_dir_path, list_dir,
- list_file, suffix,
- recursive)
- else: # a file path
- absolute_path = self.join_path(dir_path, path)
- rel_path = absolute_path[len(root):]
- if (suffix is None
- or rel_path.endswith(suffix)) and list_file:
- yield rel_path
-
- return _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive)
-
-
-class MemcachedBackend(BaseStorageBackend):
- """Memcached storage backend.
-
- Attributes:
- server_list_cfg (str): Config file for memcached server list.
- client_cfg (str): Config file for memcached client.
- sys_path (str | None): Additional path to be appended to `sys.path`.
- Default: None.
- """
-
- def __init__(self, server_list_cfg, client_cfg, sys_path=None):
- if sys_path is not None:
- import sys
- sys.path.append(sys_path)
- try:
- import mc
- except ImportError:
- raise ImportError(
- 'Please install memcached to enable MemcachedBackend.')
-
- self.server_list_cfg = server_list_cfg
- self.client_cfg = client_cfg
- self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg,
- self.client_cfg)
-        # mc.pyvector serves as a pointer to a memory cache
- self._mc_buffer = mc.pyvector()
-
- def get(self, filepath):
- filepath = str(filepath)
- import mc
- self._client.Get(filepath, self._mc_buffer)
- value_buf = mc.ConvertBuffer(self._mc_buffer)
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class LmdbBackend(BaseStorageBackend):
- """Lmdb storage backend.
-
- Args:
- db_path (str): Lmdb database path.
- readonly (bool, optional): Lmdb environment parameter. If True,
- disallow any write operations. Default: True.
- lock (bool, optional): Lmdb environment parameter. If False, when
- concurrent access occurs, do not lock the database. Default: False.
- readahead (bool, optional): Lmdb environment parameter. If False,
- disable the OS filesystem readahead mechanism, which may improve
- random read performance when a database is larger than RAM.
- Default: False.
-
- Attributes:
- db_path (str): Lmdb database path.
- """
-
- def __init__(self,
- db_path,
- readonly=True,
- lock=False,
- readahead=False,
- **kwargs):
- try:
- import lmdb
- except ImportError:
- raise ImportError('Please install lmdb to enable LmdbBackend.')
-
- self.db_path = str(db_path)
- self._client = lmdb.open(
- self.db_path,
- readonly=readonly,
- lock=lock,
- readahead=readahead,
- **kwargs)
-
- def get(self, filepath):
- """Get values according to the filepath.
-
- Args:
- filepath (str | obj:`Path`): Here, filepath is the lmdb key.
- """
- filepath = str(filepath)
- with self._client.begin(write=False) as txn:
- value_buf = txn.get(filepath.encode('ascii'))
- return value_buf
-
- def get_text(self, filepath, encoding=None):
- raise NotImplementedError
-
-
-class HardDiskBackend(BaseStorageBackend):
- """Raw hard disks storage backend."""
-
- _allow_symlink = True
-
- def get(self, filepath: Union[str, Path]) -> bytes:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- bytes: Expected bytes object.
- """
- with open(filepath, 'rb') as f:
- value_buf = f.read()
- return value_buf
-
- def get_text(self,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- with open(filepath, 'r', encoding=encoding) as f:
- value_buf = f.read()
- return value_buf
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'wb' mode.
-
- Note:
- ``put`` will create a directory if the directory of ``filepath``
- does not exist.
-
- Args:
- obj (bytes): Data to be written.
- filepath (str or Path): Path to write data.
- """
- mmcv.mkdir_or_exist(osp.dirname(filepath))
- with open(filepath, 'wb') as f:
- f.write(obj)
-
- def put_text(self,
- obj: str,
- filepath: Union[str, Path],
- encoding: str = 'utf-8') -> None:
- """Write data to a given ``filepath`` with 'w' mode.
-
- Note:
- ``put_text`` will create a directory if the directory of
- ``filepath`` does not exist.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
- """
- mmcv.mkdir_or_exist(osp.dirname(filepath))
- with open(filepath, 'w', encoding=encoding) as f:
- f.write(obj)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str or Path): Path to be removed.
- """
- os.remove(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- return osp.exists(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- return osp.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- return osp.isfile(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Join one or more filepath components intelligently. The return value
- is the concatenation of filepath and any members of *filepaths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result of concatenation.
- """
- return osp.join(filepath, *filepaths)
-
- @contextmanager
- def get_local_path(
- self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]:
- """Only for unified API and do nothing."""
- yield filepath
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- if list_dir and suffix is not None:
- raise TypeError('`suffix` should be None when `list_dir` is True')
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('`suffix` must be a string or tuple of strings')
-
- root = dir_path
-
- def _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- rel_path = osp.relpath(entry.path, root)
- if (suffix is None
- or rel_path.endswith(suffix)) and list_file:
- yield rel_path
- elif osp.isdir(entry.path):
- if list_dir:
- rel_dir = osp.relpath(entry.path, root)
- yield rel_dir
- if recursive:
- yield from _list_dir_or_file(entry.path, list_dir,
- list_file, suffix,
- recursive)
-
- return _list_dir_or_file(dir_path, list_dir, list_file, suffix,
- recursive)
-
-
-class HTTPBackend(BaseStorageBackend):
-    """HTTP and HTTPS storage backend."""
-
- def get(self, filepath):
- value_buf = urlopen(filepath).read()
- return value_buf
-
- def get_text(self, filepath, encoding='utf-8'):
- value_buf = urlopen(filepath).read()
- return value_buf.decode(encoding)
-
- @contextmanager
- def get_local_path(self, filepath: str) -> Iterable[str]:
- """Download a file from ``filepath``.
-
-        ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
-        can be called with a ``with`` statement, and when exiting from the
-        ``with`` statement, the temporary path will be released.
-
- Args:
- filepath (str): Download a file from ``filepath``.
-
- Examples:
- >>> client = HTTPBackend()
-            >>> # After exiting from the ``with`` clause,
- >>> # the path will be removed
- >>> with client.get_local_path('http://path/of/your/file') as path:
- ... # do something here
- """
- try:
- f = tempfile.NamedTemporaryFile(delete=False)
- f.write(self.get(filepath))
- f.close()
- yield f.name
- finally:
- os.remove(f.name)
-
-
-class FileClient:
- """A general file client to access files in different backends.
-
- The client loads a file or text in a specified backend from its path
-    and returns it as a binary or text file. There are two ways to choose a
-    backend: the name of the backend and the prefix of the path. Although both
-    can be used to choose a storage backend, ``backend`` has a higher priority,
-    i.e. if both are set, the storage backend will be chosen by the ``backend``
-    argument. If both are `None`, the disk backend will be chosen. Note that it
-    can also register other backend accessors with a given name, prefixes, and
-    backend class. In addition, we use the singleton pattern to avoid repeated
-    object creation: if the arguments are the same, the same object will be
-    returned.
-
- Args:
- backend (str, optional): The storage backend type. Options are "disk",
- "ceph", "memcached", "lmdb", "http" and "petrel". Default: None.
- prefix (str, optional): The prefix of the registered storage backend.
- Options are "s3", "http", "https". Default: None.
-
- Examples:
- >>> # only set backend
- >>> file_client = FileClient(backend='petrel')
- >>> # only set prefix
- >>> file_client = FileClient(prefix='s3')
- >>> # set both backend and prefix but use backend to choose client
- >>> file_client = FileClient(backend='petrel', prefix='s3')
- >>> # if the arguments are the same, the same object is returned
- >>> file_client1 = FileClient(backend='petrel')
- >>> file_client1 is file_client
- True
-
- Attributes:
- client (:obj:`BaseStorageBackend`): The backend object.
- """
-
- _backends = {
- 'disk': HardDiskBackend,
- 'ceph': CephBackend,
- 'memcached': MemcachedBackend,
- 'lmdb': LmdbBackend,
- 'petrel': PetrelBackend,
- 'http': HTTPBackend,
- }
- # This collection is used to record the overridden backends, and when a
- # backend appears in the collection, the singleton pattern is disabled for
- # that backend, because if the singleton pattern is used, then the object
- # returned will be the backend before overwriting
- _overridden_backends = set()
- _prefix_to_backends = {
- 's3': PetrelBackend,
- 'http': HTTPBackend,
- 'https': HTTPBackend,
- }
- _overridden_prefixes = set()
-
- _instances = {}
-
- def __new__(cls, backend=None, prefix=None, **kwargs):
- if backend is None and prefix is None:
- backend = 'disk'
- if backend is not None and backend not in cls._backends:
- raise ValueError(
- f'Backend {backend} is not supported. Currently supported ones'
- f' are {list(cls._backends.keys())}')
- if prefix is not None and prefix not in cls._prefix_to_backends:
- raise ValueError(
- f'prefix {prefix} is not supported. Currently supported ones '
- f'are {list(cls._prefix_to_backends.keys())}')
-
- # concatenate the arguments to a unique key for determining whether
- # objects with the same arguments were created
- arg_key = f'{backend}:{prefix}'
- for key, value in kwargs.items():
- arg_key += f':{key}:{value}'
-
- # if a backend was overridden, it will create a new object
- if (arg_key in cls._instances
- and backend not in cls._overridden_backends
- and prefix not in cls._overridden_prefixes):
- _instance = cls._instances[arg_key]
- else:
- # create a new object and put it to _instance
- _instance = super().__new__(cls)
- if backend is not None:
- _instance.client = cls._backends[backend](**kwargs)
- else:
- _instance.client = cls._prefix_to_backends[prefix](**kwargs)
-
- cls._instances[arg_key] = _instance
-
- return _instance
-
- @property
- def name(self):
- return self.client.name
-
- @property
- def allow_symlink(self):
- return self.client.allow_symlink
-
- @staticmethod
- def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]:
- """Parse the prefix of a uri.
-
- Args:
- uri (str | Path): Uri to be parsed that contains the file prefix.
-
- Examples:
- >>> FileClient.parse_uri_prefix('s3://path/of/your/file')
- 's3'
-
- Returns:
- str | None: Return the prefix of uri if the uri contains '://'
- else ``None``.
- """
- assert is_filepath(uri)
- uri = str(uri)
- if '://' not in uri:
- return None
- else:
- prefix, _ = uri.split('://')
-            # In the case of PetrelBackend, the prefix may contain the cluster
- # name like clusterName:s3
- if ':' in prefix:
- _, prefix = prefix.split(':')
- return prefix
-
- @classmethod
- def infer_client(cls,
- file_client_args: Optional[dict] = None,
- uri: Optional[Union[str, Path]] = None) -> 'FileClient':
- """Infer a suitable file client based on the URI and arguments.
-
- Args:
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. Default: None.
- uri (str | Path, optional): Uri to be parsed that contains the file
- prefix. Default: None.
-
- Examples:
- >>> uri = 's3://path/of/your/file'
- >>> file_client = FileClient.infer_client(uri=uri)
- >>> file_client_args = {'backend': 'petrel'}
- >>> file_client = FileClient.infer_client(file_client_args)
-
- Returns:
- FileClient: Instantiated FileClient object.
- """
- assert file_client_args is not None or uri is not None
- if file_client_args is None:
- file_prefix = cls.parse_uri_prefix(uri) # type: ignore
- return cls(prefix=file_prefix)
- else:
- return cls(**file_client_args)
-
- @classmethod
- def _register_backend(cls, name, backend, force=False, prefixes=None):
- if not isinstance(name, str):
- raise TypeError('the backend name should be a string, '
- f'but got {type(name)}')
- if not inspect.isclass(backend):
- raise TypeError(
- f'backend should be a class but got {type(backend)}')
- if not issubclass(backend, BaseStorageBackend):
- raise TypeError(
- f'backend {backend} is not a subclass of BaseStorageBackend')
- if not force and name in cls._backends:
- raise KeyError(
- f'{name} is already registered as a storage backend, '
- 'add "force=True" if you want to override it')
-
- if name in cls._backends and force:
- cls._overridden_backends.add(name)
- cls._backends[name] = backend
-
- if prefixes is not None:
- if isinstance(prefixes, str):
- prefixes = [prefixes]
- else:
- assert isinstance(prefixes, (list, tuple))
- for prefix in prefixes:
- if prefix not in cls._prefix_to_backends:
- cls._prefix_to_backends[prefix] = backend
- elif (prefix in cls._prefix_to_backends) and force:
- cls._overridden_prefixes.add(prefix)
- cls._prefix_to_backends[prefix] = backend
- else:
- raise KeyError(
- f'{prefix} is already registered as a storage backend,'
- ' add "force=True" if you want to override it')
-
- @classmethod
- def register_backend(cls, name, backend=None, force=False, prefixes=None):
- """Register a backend to FileClient.
-
- This method can be used as a normal class method or a decorator.
-
- .. code-block:: python
-
- class NewBackend(BaseStorageBackend):
-
- def get(self, filepath):
- return filepath
-
- def get_text(self, filepath):
- return filepath
-
- FileClient.register_backend('new', NewBackend)
-
- or
-
- .. code-block:: python
-
- @FileClient.register_backend('new')
- class NewBackend(BaseStorageBackend):
-
- def get(self, filepath):
- return filepath
-
- def get_text(self, filepath):
- return filepath
-
- Args:
- name (str): The name of the registered backend.
- backend (class, optional): The backend class to be registered,
- which must be a subclass of :class:`BaseStorageBackend`.
- When this method is used as a decorator, backend is None.
- Defaults to None.
- force (bool, optional): Whether to override the backend if the name
- has already been registered. Defaults to False.
- prefixes (str or list[str] or tuple[str], optional): The prefixes
- of the registered storage backend. Default: None.
- `New in version 1.3.15.`
- """
- if backend is not None:
- cls._register_backend(
- name, backend, force=force, prefixes=prefixes)
- return
-
- def _register(backend_cls):
- cls._register_backend(
- name, backend_cls, force=force, prefixes=prefixes)
- return backend_cls
-
- return _register
-
- def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]:
- """Read data from a given ``filepath`` with 'rb' mode.
-
- Note:
- There are two types of return values for ``get``, one is ``bytes``
- and the other is ``memoryview``. The advantage of using memoryview
- is that you can avoid copying, and if you want to convert it to
- ``bytes``, you can use ``.tobytes()``.
-
- Args:
- filepath (str or Path): Path to read data.
-
- Returns:
- bytes | memoryview: Expected bytes object or a memory view of the
- bytes object.
- """
- return self.client.get(filepath)
-
- def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str:
- """Read data from a given ``filepath`` with 'r' mode.
-
- Args:
- filepath (str or Path): Path to read data.
- encoding (str): The encoding format used to open the ``filepath``.
- Default: 'utf-8'.
-
- Returns:
- str: Expected text reading from ``filepath``.
- """
- return self.client.get_text(filepath, encoding)
-
- def put(self, obj: bytes, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'wb' mode.
-
- Note:
- ``put`` should create a directory if the directory of ``filepath``
- does not exist.
-
- Args:
- obj (bytes): Data to be written.
- filepath (str or Path): Path to write data.
- """
- self.client.put(obj, filepath)
-
- def put_text(self, obj: str, filepath: Union[str, Path]) -> None:
- """Write data to a given ``filepath`` with 'w' mode.
-
- Note:
- ``put_text`` should create a directory if the directory of
- ``filepath`` does not exist.
-
- Args:
- obj (str): Data to be written.
- filepath (str or Path): Path to write data.
- encoding (str, optional): The encoding format used to open the
- `filepath`. Default: 'utf-8'.
- """
- self.client.put_text(obj, filepath)
-
- def remove(self, filepath: Union[str, Path]) -> None:
- """Remove a file.
-
- Args:
- filepath (str, Path): Path to be removed.
- """
- self.client.remove(filepath)
-
- def exists(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path exists.
-
- Args:
- filepath (str or Path): Path to be checked whether exists.
-
- Returns:
- bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise.
- """
- return self.client.exists(filepath)
-
- def isdir(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a directory.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a
- directory.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a directory,
- ``False`` otherwise.
- """
- return self.client.isdir(filepath)
-
- def isfile(self, filepath: Union[str, Path]) -> bool:
- """Check whether a file path is a file.
-
- Args:
- filepath (str or Path): Path to be checked whether it is a file.
-
- Returns:
- bool: Return ``True`` if ``filepath`` points to a file, ``False``
- otherwise.
- """
- return self.client.isfile(filepath)
-
- def join_path(self, filepath: Union[str, Path],
- *filepaths: Union[str, Path]) -> str:
- """Concatenate all file paths.
-
- Join one or more filepath components intelligently. The return value
- is the concatenation of filepath and any members of *filepaths.
-
- Args:
- filepath (str or Path): Path to be concatenated.
-
- Returns:
- str: The result of concatenation.
- """
- return self.client.join_path(filepath, *filepaths)
-
- @contextmanager
- def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]:
- """Download data from ``filepath`` and write the data to local path.
-
-        ``get_local_path`` is decorated by :meth:`contextlib.contextmanager`. It
-        can be called with a ``with`` statement, and when exiting from the
-        ``with`` statement, the temporary path will be released.
-
- Note:
- If the ``filepath`` is a local path, just return itself.
-
- .. warning::
- ``get_local_path`` is an experimental interface that may change in
- the future.
-
- Args:
-            filepath (str or Path): Path to read data from.
-
- Examples:
- >>> file_client = FileClient(prefix='s3')
- >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path:
- ... # do something here
-
- Yields:
- Iterable[str]: Only yield one path.
- """
- with self.client.get_local_path(str(filepath)) as local_path:
- yield local_path
-
- def list_dir_or_file(self,
- dir_path: Union[str, Path],
- list_dir: bool = True,
- list_file: bool = True,
- suffix: Optional[Union[str, Tuple[str]]] = None,
- recursive: bool = False) -> Iterator[str]:
- """Scan a directory to find the interested directories or files in
- arbitrary order.
-
- Note:
- :meth:`list_dir_or_file` returns the path relative to ``dir_path``.
-
- Args:
- dir_path (str | Path): Path of the directory.
- list_dir (bool): List the directories. Default: True.
- list_file (bool): List the path of files. Default: True.
- suffix (str or tuple[str], optional): File suffix
- that we are interested in. Default: None.
- recursive (bool): If set to True, recursively scan the
- directory. Default: False.
-
- Yields:
- Iterable[str]: A relative path to ``dir_path``.
- """
- yield from self.client.list_dir_or_file(dir_path, list_dir, list_file,
- suffix, recursive)
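
A short usage sketch for the deleted client above, following its own docstring examples; the paths and the `DummyBackend` class are illustrative.

```python
# Illustrative usage of FileClient; the paths and DummyBackend are placeholders.
client = FileClient(backend='disk')
client.put_text('hello', '/tmp/mmcv_demo/hello.txt')  # creates /tmp/mmcv_demo if needed
print(client.get_text('/tmp/mmcv_demo/hello.txt'))    # -> 'hello'

# Registering a minimal custom backend through the decorator form of
# FileClient.register_backend, then resolving it by name.
@FileClient.register_backend('dummy', prefixes='dummy')
class DummyBackend(BaseStorageBackend):

    def get(self, filepath):
        return b'binary payload'

    def get_text(self, filepath, encoding='utf-8'):
        return 'text payload'

print(FileClient(backend='dummy').get('dummy://anything'))  # -> b'binary payload'
```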
diff --git a/spaces/MiloSobral/PortiloopDemo/README.md b/spaces/MiloSobral/PortiloopDemo/README.md
deleted file mode 100644
index 01feab3f62986ebe2d227fee97cdd5ce33ef759b..0000000000000000000000000000000000000000
--- a/spaces/MiloSobral/PortiloopDemo/README.md
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Portiloop Demo
-emoji: 💤
-colorFrom: blue
-colorTo: grey
-sdk: gradio
-sdk_version: 3.12.0
-app_file: portiloop/src/demo/demo.py
-pinned: false
----
-
-# Portiloop software
-
-This software works with the [Coral implementation](https://github.com/Portiloop/portiloop-hardware) of the `Portiloop` EEG closed-loop stimulation device.
-
-It enables controlling the `Portiloop` from a simple Graphical User Interface (GUI).
-
-## Quick links
-- [Installation on the Portiloop](#installation)
-- [GUI usage](#usage)
-
-## Usage:
-
-The `Portiloop` GUI is a web-based interface running as a `jupyter` server.
-
-- Connect to the `Portiloop` WiFi network.
-- Open your favorite web browser
-- Enter the following address: `192.168.0.1:9000`
-
-You should now be connected to the `jupyter` server.
-
-_If the jupyter notebook is not yet created:_
-- Hit `New` and select `Python 3`.
-
-This creates a `jupyter` notebook, in which you can simply paste and execute the following:
-
-```python
-from portiloop.capture import Capture
-
-cap = Capture()
-```
-
-_When the jupyter notebook is created:_
-
-You can open the notebook and simply execute the cell.
-
-The GUI now looks like this:
-
-
-
-### Channels:
-
-The `Channels` panel enables you to configure each electrode:
-- `disabled`: the electrode is not used
-- `simple`: the electrode is simply used to measure signal (not recommended)
-- `with bias`: the electrode is used to measure signal and to compute a bias ("ground") signal
-- `bias out`: the electrode is used to output the bias ("ground") signal
-
-### General controls:
-
-- `Freq` is the desired sampling rate
-- `Time` is the maximum duration of the experiment (you can also stop the experiment manually)
-- `Recording` is the name of the `.edf` output file if you wish to record the signal locally
-- Tick `Filter` to enable the online filtering pipeline
-- Tick `Detect` to enable the online detection pipeline
-- Tick `Stimulate` to enable the online stimulation pipeline
-- Tick `Record EDF` to record the signal in the file designated in `Recording`
-- Tick `Stream LSL` to broadcast the signal on the local network via [LSL](https://labstreaminglayer.readthedocs.io/info/intro.html)
-- Tick `Display` to display the signal in the GUI
-- `Threshold` enables customizing the optional detection threshold from the GUI (e.g., for classifiers)
-- The `Clock` widget lets you select the sampling method:
- - `Coral` sets the `ADS1299` sampling rate to twice your target sampling rate, and uses the Coral Real-Time clock to stick to your target sampling rate
- - `ADS` sets the `ADS1299` sampling rate to the closest compatible to your target sampling rate and uses the ADS interrupts
-
-### Custom Filtering
-
-The `Filtering` section lets you customize the filtering pipeline from the GUI.
-
-- The `FIR filter` switch lets you select between the default low-pass FIR filter (used in the Portiloop [paper](https://arxiv.org/abs/2107.13473)), or customize this filter according to your needs (`FIR order` and `FIR cutoff`)
-- `Polyak mean`, `Polyak std` and `Epsilon` let you customize the online standardization pipeline, which also acts as a high-pass filter
-
-### Capture
-
-The `Capture` switch lets you start and stop the experiment at any point in time
-
-_Note: once the experiment is started, all widgets are deactivated until you stop the experiment._
-
-## Installation:
-
-Follow these instructions if the software is not readily installed on your `Portiloop` device.
-
-### Install the library:
-
-_(Requires python 3)_
-
-#### Install the following libraries from apt to avoid issues:
-- `sudo apt install python3-numpy`
-- `sudo apt install python3-scipy`
-- `sudo apt install python3-pycoral`
-- Clone this repository on the `Coral` board
-- `cd` to the root of the repository where the `setup.py` file is located
-- Execute `pip3 install -e .`
-
-### Setup the Coral board as a wifi access point
-
-You can find instructions [here](https://www.linux.com/training-tutorials/create-secure-linux-based-wireless-access-point/) to set Linux as a WiFi access point.
-
-### Setup a jupyter server:
-
-- On your `Portiloop` device, execute `pip3 install notebook`
-- Generate a `jupyter` password and copy the result:
-```python
-from notebook.auth import passwd
-passwd()
-```
-- Execute `jupyter notebook --generate-config`
-- `cd` to the `.jupyter` folder and edit `jupyter_notebook_config.py`
-- Find the relevant lines, and uncomment them while setting the following values:
- - `c.NotebookApp.ip = '*'`
- - `c.NotebookApp.open_browser = False`
- - `c.NotebookApp.password = u'your_generated_password_here'`
- - `c.NotebookApp.port = 9000`
-
-### Setup a service for your jupyter server to start automatically:
-
-- `cd /etc/systemd/system`
-- create an empty file named `notebook.service` and open it.
-- paste the following and save:
-```bash
-[Unit]
-Description=Autostarts jupyter server
-
-[Service]
-User=mendel
-WorkingDirectory=~
-ExecStart=jupyter notebook
-Restart=always
-
-[Install]
-WantedBy=multi-user.target
-```
-- Execute `sudo systemctl daemon-reload`
-- Execute `sudo systemctl enable notebook.service` so the service starts automatically at boot
-- Execute `sudo systemctl start notebook.service`
-- Check that your service is up and running: `sudo systemctl status notebook.service`
diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/misc.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/misc.py
deleted file mode 100644
index d64b84ef24bea0c98e76824feb1903f6bfebe7a5..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/misc.py
+++ /dev/null
@@ -1,717 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-import colorsys
-import datetime
-import functools
-import io
-import json
-import os
-import pickle
-import subprocess
-import time
-from collections import OrderedDict, defaultdict, deque
-from typing import List, Optional
-
-import numpy as np
-import torch
-import torch.distributed as dist
-
-# needed due to empty tensor bug in pytorch and torchvision 0.5
-import torchvision
-from torch import Tensor
-
-__torchvision_need_compat_flag = float(torchvision.__version__.split(".")[1]) < 7
-if __torchvision_need_compat_flag:
- from torchvision.ops import _new_empty_tensor
- from torchvision.ops.misc import _output_size
-
-
-class SmoothedValue(object):
- """Track a series of values and provide access to smoothed values over a
- window or the global series average.
- """
-
- def __init__(self, window_size=20, fmt=None):
- if fmt is None:
- fmt = "{median:.4f} ({global_avg:.4f})"
- self.deque = deque(maxlen=window_size)
- self.total = 0.0
- self.count = 0
- self.fmt = fmt
-
- def update(self, value, n=1):
- self.deque.append(value)
- self.count += n
- self.total += value * n
-
- def synchronize_between_processes(self):
- """
- Warning: does not synchronize the deque!
- """
- if not is_dist_avail_and_initialized():
- return
- t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda")
- dist.barrier()
- dist.all_reduce(t)
- t = t.tolist()
- self.count = int(t[0])
- self.total = t[1]
-
- @property
- def median(self):
- d = torch.tensor(list(self.deque))
- if d.shape[0] == 0:
- return 0
- return d.median().item()
-
- @property
- def avg(self):
- d = torch.tensor(list(self.deque), dtype=torch.float32)
- return d.mean().item()
-
- @property
- def global_avg(self):
- if os.environ.get("SHILONG_AMP", None) == "1":
- eps = 1e-4
- else:
- eps = 1e-6
- return self.total / (self.count + eps)
-
- @property
- def max(self):
- return max(self.deque)
-
- @property
- def value(self):
- return self.deque[-1]
-
- def __str__(self):
- return self.fmt.format(
- median=self.median,
- avg=self.avg,
- global_avg=self.global_avg,
- max=self.max,
- value=self.value,
- )
-
-
-@functools.lru_cache()
-def _get_global_gloo_group():
- """
- Return a process group based on gloo backend, containing all the ranks
- The result is cached.
- """
-
- if dist.get_backend() == "nccl":
- return dist.new_group(backend="gloo")
-
- return dist.group.WORLD
-
-
-def all_gather_cpu(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
-
- world_size = get_world_size()
- if world_size == 1:
- return [data]
-
- cpu_group = _get_global_gloo_group()
-
- buffer = io.BytesIO()
- torch.save(data, buffer)
- data_view = buffer.getbuffer()
- device = "cuda" if cpu_group is None else "cpu"
- tensor = torch.ByteTensor(data_view).to(device)
-
- # obtain Tensor size of each rank
- local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long)
- size_list = [torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size)]
- if cpu_group is None:
- dist.all_gather(size_list, local_size)
- else:
- print("gathering on cpu")
- dist.all_gather(size_list, local_size, group=cpu_group)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
- assert isinstance(local_size.item(), int)
- local_size = int(local_size.item())
-
- # receiving Tensor from all ranks
- # we pad the tensor because torch all_gather does not support
- # gathering tensors of different shapes
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device))
- if local_size != max_size:
- padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device=device)
- tensor = torch.cat((tensor, padding), dim=0)
- if cpu_group is None:
- dist.all_gather(tensor_list, tensor)
- else:
- dist.all_gather(tensor_list, tensor, group=cpu_group)
-
- data_list = []
- for size, tensor in zip(size_list, tensor_list):
- tensor = torch.split(tensor, [size, max_size - size], dim=0)[0]
- buffer = io.BytesIO(tensor.cpu().numpy())
- obj = torch.load(buffer)
- data_list.append(obj)
-
- return data_list
-
-
-def all_gather(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
-
- if os.getenv("CPU_REDUCE") == "1":
- return all_gather_cpu(data)
-
- world_size = get_world_size()
- if world_size == 1:
- return [data]
-
- # serialized to a Tensor
- buffer = pickle.dumps(data)
- storage = torch.ByteStorage.from_buffer(buffer)
- tensor = torch.ByteTensor(storage).to("cuda")
-
- # obtain Tensor size of each rank
- local_size = torch.tensor([tensor.numel()], device="cuda")
- size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)]
- dist.all_gather(size_list, local_size)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
-
- # receiving Tensor from all ranks
- # we pad the tensor because torch all_gather does not support
- # gathering tensors of different shapes
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda"))
- if local_size != max_size:
- padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda")
- tensor = torch.cat((tensor, padding), dim=0)
- dist.all_gather(tensor_list, tensor)
-
- data_list = []
- for size, tensor in zip(size_list, tensor_list):
- buffer = tensor.cpu().numpy().tobytes()[:size]
- data_list.append(pickle.loads(buffer))
-
- return data_list
-
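# A minimal usage sketch of all_gather (illustrative, not part of the original
# file): every rank contributes an arbitrary picklable object and receives the
# full list; with a single process this degenerates to a one-element list.
def _example_all_gather():
    local_stats = {"rank": get_rank(), "num_boxes": 17}
    gathered = all_gather(local_stats)  # list with one dict per rank
    total_boxes = sum(s["num_boxes"] for s in gathered)
    return total_boxes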
-
-def reduce_dict(input_dict, average=True):
- """
- Args:
- input_dict (dict): all the values will be reduced
- average (bool): whether to do average or sum
- Reduce the values in the dictionary from all processes so that all processes
- have the averaged results. Returns a dict with the same fields as
- input_dict, after reduction.
- """
- world_size = get_world_size()
- if world_size < 2:
- return input_dict
- with torch.no_grad():
- names = []
- values = []
- # sort the keys so that they are consistent across processes
- for k in sorted(input_dict.keys()):
- names.append(k)
- values.append(input_dict[k])
- values = torch.stack(values, dim=0)
- dist.all_reduce(values)
- if average:
- values /= world_size
- reduced_dict = {k: v for k, v in zip(names, values)}
- return reduced_dict
-
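# A minimal usage sketch of reduce_dict (illustrative, not part of the original
# file): average per-rank scalar loss tensors so every process logs the same
# numbers; with a single process the input is returned unchanged.
def _example_reduce_dict(loss_dict):
    # loss_dict maps names to scalar tensors, e.g. {"loss_ce": tensor(0.7), ...}
    reduced = reduce_dict(loss_dict, average=True)
    return {k: v.item() if isinstance(v, torch.Tensor) else v for k, v in reduced.items()}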
-
-class MetricLogger(object):
- def __init__(self, delimiter="\t"):
- self.meters = defaultdict(SmoothedValue)
- self.delimiter = delimiter
-
- def update(self, **kwargs):
- for k, v in kwargs.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- assert isinstance(v, (float, int))
- self.meters[k].update(v)
-
- def __getattr__(self, attr):
- if attr in self.meters:
- return self.meters[attr]
- if attr in self.__dict__:
- return self.__dict__[attr]
- raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, attr))
-
- def __str__(self):
- loss_str = []
- for name, meter in self.meters.items():
- # print(name, str(meter))
- # import ipdb;ipdb.set_trace()
- if meter.count > 0:
- loss_str.append("{}: {}".format(name, str(meter)))
- return self.delimiter.join(loss_str)
-
- def synchronize_between_processes(self):
- for meter in self.meters.values():
- meter.synchronize_between_processes()
-
- def add_meter(self, name, meter):
- self.meters[name] = meter
-
- def log_every(self, iterable, print_freq, header=None, logger=None):
- if logger is None:
- print_func = print
- else:
- print_func = logger.info
-
- i = 0
- if not header:
- header = ""
- start_time = time.time()
- end = time.time()
- iter_time = SmoothedValue(fmt="{avg:.4f}")
- data_time = SmoothedValue(fmt="{avg:.4f}")
- space_fmt = ":" + str(len(str(len(iterable)))) + "d"
- if torch.cuda.is_available():
- log_msg = self.delimiter.join(
- [
- header,
- "[{0" + space_fmt + "}/{1}]",
- "eta: {eta}",
- "{meters}",
- "time: {time}",
- "data: {data}",
- "max mem: {memory:.0f}",
- ]
- )
- else:
- log_msg = self.delimiter.join(
- [
- header,
- "[{0" + space_fmt + "}/{1}]",
- "eta: {eta}",
- "{meters}",
- "time: {time}",
- "data: {data}",
- ]
- )
- MB = 1024.0 * 1024.0
- for obj in iterable:
- data_time.update(time.time() - end)
- yield obj
- # import ipdb; ipdb.set_trace()
- iter_time.update(time.time() - end)
- if i % print_freq == 0 or i == len(iterable) - 1:
- eta_seconds = iter_time.global_avg * (len(iterable) - i)
- eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
- if torch.cuda.is_available():
- print_func(
- log_msg.format(
- i,
- len(iterable),
- eta=eta_string,
- meters=str(self),
- time=str(iter_time),
- data=str(data_time),
- memory=torch.cuda.max_memory_allocated() / MB,
- )
- )
- else:
- print_func(
- log_msg.format(
- i,
- len(iterable),
- eta=eta_string,
- meters=str(self),
- time=str(iter_time),
- data=str(data_time),
- )
- )
- i += 1
- end = time.time()
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print_func(
- "{} Total time: {} ({:.4f} s / it)".format(
- header, total_time_str, total_time / len(iterable)
- )
- )
-
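# A minimal usage sketch of MetricLogger.log_every (illustrative, not part of the
# original file): wrap a sized iterable (e.g. a DataLoader), update named meters
# each step, and let the logger print ETA / timing every `print_freq` iterations.
def _example_metric_logger(data_loader, print_freq=10):
    logger = MetricLogger(delimiter="  ")
    for samples in logger.log_every(data_loader, print_freq, header="Epoch: [0]"):
        logger.update(loss=0.5, lr=1e-4)  # stand-ins for real per-step values
    logger.synchronize_between_processes()
    return logger.meters["loss"].global_avg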
-
-def get_sha():
- cwd = os.path.dirname(os.path.abspath(__file__))
-
- def _run(command):
- return subprocess.check_output(command, cwd=cwd).decode("ascii").strip()
-
- sha = "N/A"
- diff = "clean"
- branch = "N/A"
- try:
- sha = _run(["git", "rev-parse", "HEAD"])
- subprocess.check_output(["git", "diff"], cwd=cwd)
- diff = _run(["git", "diff-index", "HEAD"])
- diff = "has uncommited changes" if diff else "clean"
- branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"])
- except Exception:
- pass
- message = f"sha: {sha}, status: {diff}, branch: {branch}"
- return message
-
-
-def collate_fn(batch):
- # import ipdb; ipdb.set_trace()
- batch = list(zip(*batch))
- batch[0] = nested_tensor_from_tensor_list(batch[0])
- return tuple(batch)
-
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
- if mask == "auto":
- self.mask = torch.zeros_like(tensors).to(tensors.device)
- if self.mask.dim() == 3:
- self.mask = self.mask.sum(0).to(bool)
- elif self.mask.dim() == 4:
- self.mask = self.mask.sum(1).to(bool)
- else:
- raise ValueError(
- "tensors dim must be 3 or 4 but {}({})".format(
- self.tensors.dim(), self.tensors.shape
- )
- )
-
- def imgsize(self):
- res = []
- for i in range(self.tensors.shape[0]):
- mask = self.mask[i]
- maxH = (~mask).sum(0).max()
- maxW = (~mask).sum(1).max()
- res.append(torch.Tensor([maxH, maxW]))
- return res
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def to_img_list_single(self, tensor, mask):
- assert tensor.dim() == 3, "dim of tensor should be 3 but {}".format(tensor.dim())
- maxH = (~mask).sum(0).max()
- maxW = (~mask).sum(1).max()
- img = tensor[:, :maxH, :maxW]
- return img
-
- def to_img_list(self):
- """remove the padding and convert to img list
-
- Returns:
- [type]: [description]
- """
- if self.tensors.dim() == 3:
- return self.to_img_list_single(self.tensors, self.mask)
- else:
- res = []
- for i in range(self.tensors.shape[0]):
- tensor_i = self.tensors[i]
- mask_i = self.mask[i]
- res.append(self.to_img_list_single(tensor_i, mask_i))
- return res
-
- @property
- def device(self):
- return self.tensors.device
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
- @property
- def shape(self):
- return {"tensors.shape": self.tensors.shape, "mask.shape": self.mask.shape}
-
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- else:
- raise ValueError("not supported")
- return NestedTensor(tensor, mask)
-
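# A minimal usage sketch of nested_tensor_from_tensor_list (illustrative, not part
# of the original file): images of different spatial sizes are padded to a common
# shape, and the mask marks padded positions with True.
def _example_nested_tensor():
    imgs = [torch.rand(3, 480, 640), torch.rand(3, 512, 512)]
    nt = nested_tensor_from_tensor_list(imgs)
    tensors, mask = nt.decompose()
    # tensors: [2, 3, 512, 640]; mask: [2, 512, 640], True where padding was added
    return tensors.shape, mask.shape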
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # work around for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def setup_for_distributed(is_master):
- """
- This function disables printing when not in master process
- """
- import builtins as __builtin__
-
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs):
- force = kwargs.pop("force", False)
- if is_master or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
-
-
-def get_world_size():
- if not is_dist_avail_and_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank():
- if not is_dist_avail_and_initialized():
- return 0
- return dist.get_rank()
-
-
-def is_main_process():
- return get_rank() == 0
-
-
-def save_on_master(*args, **kwargs):
- if is_main_process():
- torch.save(*args, **kwargs)
-
-
-def init_distributed_mode(args):
- if "WORLD_SIZE" in os.environ and os.environ["WORLD_SIZE"] != "": # 'RANK' in os.environ and
- args.rank = int(os.environ["RANK"])
- args.world_size = int(os.environ["WORLD_SIZE"])
- args.gpu = args.local_rank = int(os.environ["LOCAL_RANK"])
-
- # launch by torch.distributed.launch
- # Single node
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 1 --rank 0 ...
- # Multi nodes
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 0 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ...
- # python -m torch.distributed.launch --nproc_per_node=8 main.py --world-size 2 --rank 1 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' ...
- # args.rank = int(os.environ.get('OMPI_COMM_WORLD_RANK'))
- # local_world_size = int(os.environ['GPU_PER_NODE_COUNT'])
- # args.world_size = args.world_size * local_world_size
- # args.gpu = args.local_rank = int(os.environ['LOCAL_RANK'])
- # args.rank = args.rank * local_world_size + args.local_rank
- print(
- "world size: {}, rank: {}, local rank: {}".format(
- args.world_size, args.rank, args.local_rank
- )
- )
- print(json.dumps(dict(os.environ), indent=2))
- elif "SLURM_PROCID" in os.environ:
- args.rank = int(os.environ["SLURM_PROCID"])
- args.gpu = args.local_rank = int(os.environ["SLURM_LOCALID"])
- args.world_size = int(os.environ["SLURM_NPROCS"])
-
- print(
- "world size: {}, world rank: {}, local rank: {}, device_count: {}".format(
- args.world_size, args.rank, args.local_rank, torch.cuda.device_count()
- )
- )
- else:
- print("Not using distributed mode")
- args.distributed = False
- args.world_size = 1
- args.rank = 0
- args.local_rank = 0
- return
-
- print("world_size:{} rank:{} local_rank:{}".format(args.world_size, args.rank, args.local_rank))
- args.distributed = True
- torch.cuda.set_device(args.local_rank)
- args.dist_backend = "nccl"
- print("| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True)
-
- torch.distributed.init_process_group(
- backend=args.dist_backend,
- world_size=args.world_size,
- rank=args.rank,
- init_method=args.dist_url,
- )
-
- print("Before torch.distributed.barrier()")
- torch.distributed.barrier()
- print("End torch.distributed.barrier()")
- setup_for_distributed(args.rank == 0)
-
-
-@torch.no_grad()
-def accuracy(output, target, topk=(1,)):
- """Computes the precision@k for the specified values of k"""
- if target.numel() == 0:
- return [torch.zeros([], device=output.device)]
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].view(-1).float().sum(0)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
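# A minimal usage sketch of accuracy (illustrative, not part of the original
# file): logits for 4 samples over 5 classes, evaluated at top-1 and top-3.
def _example_accuracy():
    logits = torch.randn(4, 5)
    target = torch.tensor([0, 1, 2, 3])
    top1, top3 = accuracy(logits, target, topk=(1, 3))  # percentages in [0, 100]
    return top1.item(), top3.item()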
-
-@torch.no_grad()
-def accuracy_onehot(pred, gt):
- """_summary_
-
- Args:
- pred (_type_): n, c
- gt (_type_): n, c
- """
- tp = ((pred - gt).abs().sum(-1) < 1e-4).float().sum()
- acc = tp / gt.shape[0] * 100
- return acc
-
-
-def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None):
- # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor
- """
- Equivalent to nn.functional.interpolate, but with support for empty batch sizes.
- This will eventually be supported natively by PyTorch, and this
- function can go away.
- """
- if __torchvision_need_compat_flag:
- if input.numel() > 0:
- return torch.nn.functional.interpolate(input, size, scale_factor, mode, align_corners)
-
- output_shape = _output_size(2, input, size, scale_factor)
- output_shape = list(input.shape[:-2]) + list(output_shape)
- return _new_empty_tensor(input, output_shape)
- else:
- return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-class color_sys:
- def __init__(self, num_colors) -> None:
- self.num_colors = num_colors
- colors = []
- for i in np.arange(0.0, 360.0, 360.0 / num_colors):
- hue = i / 360.0
- lightness = (50 + np.random.rand() * 10) / 100.0
- saturation = (90 + np.random.rand() * 10) / 100.0
- colors.append(
- tuple([int(j * 255) for j in colorsys.hls_to_rgb(hue, lightness, saturation)])
- )
- self.colors = colors
-
- def __call__(self, idx):
- return self.colors[idx]
-
-
-def inverse_sigmoid(x, eps=1e-3):
- x = x.clamp(min=0, max=1)
- x1 = x.clamp(min=eps)
- x2 = (1 - x).clamp(min=eps)
- return torch.log(x1 / x2)
-
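# A small worked check of inverse_sigmoid (illustrative, not part of the original
# file): it is the logit function log(x / (1 - x)), clamped so inputs at exactly
# 0 or 1 do not produce infinities; sigmoid(inverse_sigmoid(x)) ~= x away from the ends.
def _example_inverse_sigmoid():
    x = torch.tensor([0.0, 0.25, 0.5, 0.9, 1.0])
    y = inverse_sigmoid(x, eps=1e-3)
    return torch.sigmoid(y)  # ~[0.001, 0.25, 0.5, 0.9, 0.999]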
-
-def clean_state_dict(state_dict):
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- if k[:7] == "module.":
- k = k[7:] # remove `module.`
- new_state_dict[k] = v
- return new_state_dict
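# A minimal usage sketch of clean_state_dict (illustrative, not part of the
# original file): strip the "module." prefix added by DataParallel /
# DistributedDataParallel so the checkpoint loads into a plain model. Assumes
# `checkpoint_path` stores a raw state_dict.
def _example_clean_state_dict(model, checkpoint_path):
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(clean_state_dict(state_dict))
    return model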
diff --git a/spaces/MirageML/sjc/sd1/merge_embeddings.py b/spaces/MirageML/sjc/sd1/merge_embeddings.py
deleted file mode 100644
index 61d90786957c3f32bfdade0d31e1769a58f3e85a..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/merge_embeddings.py
+++ /dev/null
@@ -1,111 +0,0 @@
-from ldm.modules.encoders.modules import FrozenCLIPEmbedder, BERTEmbedder
-from ldm.modules.embedding_manager import EmbeddingManager
-
-import argparse, os
-from functools import partial
-
-import torch
-
-def get_placeholder_loop(placeholder_string, embedder, is_sd):
-
- new_placeholder = None
-
- while True:
- if new_placeholder is None:
- new_placeholder = input(f"Placeholder string {placeholder_string} was already used. Please enter a replacement string: ")
- else:
- new_placeholder = input(f"Placeholder string '{new_placeholder}' maps to more than a single token. Please enter another string: ")
-
- token = get_clip_token_for_string(embedder.tokenizer, new_placeholder) if is_sd else get_bert_token_for_string(embedder.tknz_fn, new_placeholder)
-
- if token is not None:
- return new_placeholder, token
-
-def get_clip_token_for_string(tokenizer, string):
- batch_encoding = tokenizer(string, truncation=True, max_length=77, return_length=True,
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
- tokens = batch_encoding["input_ids"]
-
- if torch.count_nonzero(tokens - 49407) == 2:
- return tokens[0, 1]
-
- return None
-
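# Illustrative note, not part of the original file: the `== 2` check above works
# because the CLIP tokenizer pads with its end-of-text id 49407, so a prompt that
# maps to a single word token encodes as [49406 (start), <tok>, 49407, 49407, ...];
# exactly two positions then differ from 49407.
def _is_single_clip_token(tokens):
    """Return True if a padded (1, 77) CLIP encoding holds exactly one word token."""
    return int(torch.count_nonzero(tokens - 49407)) == 2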
-def get_bert_token_for_string(tokenizer, string):
- token = tokenizer(string)
- if torch.count_nonzero(token) == 3:
- return token[0, 1]
-
- return None
-
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--manager_ckpts",
- type=str,
- nargs="+",
- required=True,
- help="Paths to a set of embedding managers to be merged."
- )
-
- parser.add_argument(
- "--output_path",
- type=str,
- required=True,
- help="Output path for the merged manager",
- )
-
- parser.add_argument(
- "-sd", "--stable_diffusion",
- action="store_true",
- help="Flag to denote that we are merging stable diffusion embeddings"
- )
-
- args = parser.parse_args()
-
- if args.stable_diffusion:
- embedder = FrozenCLIPEmbedder().cuda()
- else:
- embedder = BERTEmbedder(n_embed=1280, n_layer=32).cuda()
-
- EmbeddingManager = partial(EmbeddingManager, embedder, ["*"])
-
- string_to_token_dict = {}
- string_to_param_dict = torch.nn.ParameterDict()
-
- placeholder_to_src = {}
-
- for manager_ckpt in args.manager_ckpts:
- print(f"Parsing {manager_ckpt}...")
-
- manager = EmbeddingManager()
- manager.load(manager_ckpt)
-
- for placeholder_string in manager.string_to_token_dict:
- if not placeholder_string in string_to_token_dict:
- string_to_token_dict[placeholder_string] = manager.string_to_token_dict[placeholder_string]
- string_to_param_dict[placeholder_string] = manager.string_to_param_dict[placeholder_string]
-
- placeholder_to_src[placeholder_string] = manager_ckpt
- else:
- new_placeholder, new_token = get_placeholder_loop(placeholder_string, embedder, is_sd=args.stable_diffusion)
- string_to_token_dict[new_placeholder] = new_token
- string_to_param_dict[new_placeholder] = manager.string_to_param_dict[placeholder_string]
-
- placeholder_to_src[new_placeholder] = manager_ckpt
-
- print("Saving combined manager...")
- merged_manager = EmbeddingManager()
- merged_manager.string_to_param_dict = string_to_param_dict
- merged_manager.string_to_token_dict = string_to_token_dict
- merged_manager.save(args.output_path)
-
- print("Managers merged. Final list of placeholders: ")
- print(placeholder_to_src)
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/encoder_decoder_recognizer_tta.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/encoder_decoder_recognizer_tta.py
deleted file mode 100644
index 6ee7aa1c464e2d9efefd8d8cd50a3d4cf4c2ed50..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/encoder_decoder_recognizer_tta.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List
-
-import numpy as np
-from mmengine.model import BaseTTAModel
-
-from mmocr.registry import MODELS
-from mmocr.utils.typing_utils import RecSampleList
-
-
-@MODELS.register_module()
-class EncoderDecoderRecognizerTTAModel(BaseTTAModel):
- """Merge augmented recognition results. It will select the best result
- according average scores from all augmented results.
-
- Examples:
- >>> tta_model = dict(
- >>> type='EncoderDecoderRecognizerTTAModel')
- >>>
- >>> tta_pipeline = [
- >>> dict(
- >>> type='LoadImageFromFile',
- >>> color_type='grayscale'),
- >>> dict(
- >>> type='TestTimeAug',
- >>> transforms=[
- >>> [
- >>> dict(
- >>> type='ConditionApply',
- >>> true_transforms=[
- >>> dict(
- >>> type='ImgAugWrapper',
- >>> args=[dict(cls='Rot90', k=0, keep_size=False)]) # noqa: E501
- >>> ],
- >>> condition="results['img_shape'][1]>> ),
- >>> dict(
- >>> type='ConditionApply',
- >>> true_transforms=[
- >>> dict(
- >>> type='ImgAugWrapper',
- >>> args=[dict(cls='Rot90', k=1, keep_size=False)]) # noqa: E501
- >>> ],
- >>> condition="results['img_shape'][1]>> ),
- >>> dict(
- >>> type='ConditionApply',
- >>> true_transforms=[
- >>> dict(
- >>> type='ImgAugWrapper',
- >>> args=[dict(cls='Rot90', k=3, keep_size=False)])
- >>> ],
- >>> condition="results['img_shape'][1]>> ),
- >>> ],
- >>> [
- >>> dict(
- >>> type='RescaleToHeight',
- >>> height=32,
- >>> min_width=32,
- >>> max_width=None,
- >>> width_divisor=16)
- >>> ],
- >>> # add loading annotation after ``Resize`` because ground truth
- >>> # does not need to do resize data transform
- >>> [dict(type='LoadOCRAnnotations', with_text=True)],
- >>> [
- >>> dict(
- >>> type='PackTextRecogInputs',
- >>> meta_keys=('img_path', 'ori_shape', 'img_shape',
- >>> 'valid_ratio'))
- >>> ]
- >>> ])
- >>> ]
- """
-
- def merge_preds(self,
- data_samples_list: List[RecSampleList]) -> RecSampleList:
- """Merge predictions of enhanced data to one prediction.
-
- Args:
- data_samples_list (List[RecSampleList]): List of predictions of
- all enhanced data. The shape of data_samples_list is (B, M),
- where B is the batch size and M is the number of augmented
- data.
-
- Returns:
- RecSampleList: Merged prediction.
- """
- predictions = list()
- for data_samples in data_samples_list:
- scores = [
- data_sample.pred_text.score for data_sample in data_samples
- ]
- average_scores = np.array(
- [sum(score) / max(1, len(score)) for score in scores])
- max_idx = np.argmax(average_scores)
- predictions.append(data_samples[max_idx])
- return predictions
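# A toy sketch of the selection rule above (illustrative, not part of the original
# file): for each sample, average the per-character scores of every augmented view
# and keep the view with the highest average.
def _select_best_view(scores_per_view):
    """scores_per_view: list of per-character score lists, one per augmentation."""
    averages = np.array([sum(s) / max(1, len(s)) for s in scores_per_view])
    return int(np.argmax(averages))
# e.g. _select_best_view([[0.9, 0.8], [0.99, 0.2], [0.95, 0.9]]) -> 2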
diff --git a/spaces/NATSpeech/DiffSpeech/utils/plot/plot.py b/spaces/NATSpeech/DiffSpeech/utils/plot/plot.py
deleted file mode 100644
index 9d7fc02cef69fa5517228437156e687ca054efc8..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/utils/plot/plot.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import matplotlib
-
-matplotlib.use('Agg')
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-LINE_COLORS = ['w', 'r', 'orange', 'k', 'cyan', 'm', 'b', 'lime', 'g', 'brown', 'navy']
-
-
-def spec_to_figure(spec, vmin=None, vmax=None, title='', f0s=None, dur_info=None):
- if isinstance(spec, torch.Tensor):
- spec = spec.cpu().numpy()
- H = spec.shape[1] // 2
- fig = plt.figure(figsize=(12, 6))
- plt.title(title)
- plt.pcolor(spec.T, vmin=vmin, vmax=vmax)
- if dur_info is not None:
- assert isinstance(dur_info, dict)
- txt = dur_info['txt']
- dur_gt = dur_info['dur_gt']
- if isinstance(dur_gt, torch.Tensor):
- dur_gt = dur_gt.cpu().numpy()
- dur_gt = np.cumsum(dur_gt).astype(int)
- for i in range(len(dur_gt)):
- shift = (i % 8) + 1
- plt.text(dur_gt[i], shift * 4, txt[i])
- plt.vlines(dur_gt[i], 0, H // 2, colors='b') # blue is gt
- plt.xlim(0, dur_gt[-1])
- if 'dur_pred' in dur_info:
- dur_pred = dur_info['dur_pred']
- if isinstance(dur_pred, torch.Tensor):
- dur_pred = dur_pred.cpu().numpy()
- dur_pred = np.cumsum(dur_pred).astype(int)
- for i in range(len(dur_pred)):
- shift = (i % 8) + 1
- plt.text(dur_pred[i], H + shift * 4, txt[i])
- plt.vlines(dur_pred[i], H, H * 1.5, colors='r') # red is pred
- plt.xlim(0, max(dur_gt[-1], dur_pred[-1]))
- if f0s is not None:
- ax = plt.gca()
- ax2 = ax.twinx()
- if not isinstance(f0s, dict):
- f0s = {'f0': f0s}
- for i, (k, f0) in enumerate(f0s.items()):
- if isinstance(f0, torch.Tensor):
- f0 = f0.cpu().numpy()
- ax2.plot(f0, label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.5)
- ax2.set_ylim(0, 1000)
- ax2.legend()
- return fig
diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/synthetic_util.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/synthetic_util.py
deleted file mode 100644
index c14d0223dc417e6b0bd220f65dc3db0291bb773c..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/synthetic_util.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Helper functions to generate data directly on devices."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import random
-import string
-
-from absl import logging
-import tensorflow as tf
-
-
-# The `SyntheticDataset` is a temporary solution for generating synthetic data
-# directly on devices. It is only useful for Keras with Distribution
-# Strategies. We will have better support in `tf.data` or Distribution Strategy
-# later.
-class SyntheticDataset(object):
- """A dataset that generates synthetic data on each device."""
-
- def __init__(self, dataset, split_by=1):
- # dataset.take(1) doesn't have GPU kernel.
- with tf.device('device:CPU:0'):
- tensor = tf.data.experimental.get_single_element(dataset.take(1))
- flat_tensor = tf.nest.flatten(tensor)
- variable_data = []
- initializers = []
- for t in flat_tensor:
- rebatched_t = tf.split(t, num_or_size_splits=split_by, axis=0)[0]
- assert rebatched_t.shape.is_fully_defined(), rebatched_t.shape
- v = tf.compat.v1.get_local_variable(self._random_name(),
- initializer=rebatched_t)
- variable_data.append(v)
- initializers.append(v.initializer)
- input_data = tf.nest.pack_sequence_as(tensor, variable_data)
- self._iterator = SyntheticIterator(input_data, initializers)
-
- def _random_name(self, size=10, chars=string.ascii_uppercase + string.digits):
- return ''.join(random.choice(chars) for _ in range(size))
-
- def __iter__(self):
- return self._iterator
-
- def make_one_shot_iterator(self):
- return self._iterator
-
- def make_initializable_iterator(self):
- return self._iterator
-
-
-class SyntheticIterator(object):
- """A dataset that generates synthetic data on each device."""
-
- def __init__(self, input_data, initializers):
- self._input_data = input_data
- self._initializers = initializers
-
- def get_next(self):
- return self._input_data
-
- def next(self):
- return self.__next__()
-
- def __next__(self):
- try:
- return self.get_next()
- except tf.errors.OutOfRangeError:
- raise StopIteration
-
- def initialize(self):
- if tf.executing_eagerly():
- return tf.no_op()
- else:
- return self._initializers
-
-
-def _monkey_patch_dataset_method(strategy):
- """Monkey-patch `strategy`'s `make_dataset_iterator` method."""
- def make_dataset(self, dataset):
- logging.info('Using pure synthetic data.')
- with self.scope():
- if self.extended._global_batch_size: # pylint: disable=protected-access
- return SyntheticDataset(dataset, self.num_replicas_in_sync)
- else:
- return SyntheticDataset(dataset)
-
- def make_iterator(self, dataset):
- dist_dataset = make_dataset(self, dataset)
- return iter(dist_dataset)
-
- strategy.orig_make_dataset_iterator = strategy.make_dataset_iterator
- strategy.make_dataset_iterator = make_iterator
- strategy.orig_distribute_dataset = strategy.experimental_distribute_dataset
- strategy.experimental_distribute_dataset = make_dataset
-
-
-def _undo_monkey_patch_dataset_method(strategy):
- if hasattr(strategy, 'orig_make_dataset_iterator'):
- strategy.make_dataset_iterator = strategy.orig_make_dataset_iterator
- if hasattr(strategy, 'orig_distribute_dataset'):
- strategy.experimental_distribute_dataset = strategy.orig_distribute_dataset
-
-
-def set_up_synthetic_data():
- _monkey_patch_dataset_method(tf.distribute.OneDeviceStrategy)
- _monkey_patch_dataset_method(tf.distribute.MirroredStrategy)
- _monkey_patch_dataset_method(
- tf.distribute.experimental.MultiWorkerMirroredStrategy)
-
-
-def undo_set_up_synthetic_data():
- _undo_monkey_patch_dataset_method(tf.distribute.OneDeviceStrategy)
- _undo_monkey_patch_dataset_method(tf.distribute.MirroredStrategy)
- _undo_monkey_patch_dataset_method(
- tf.distribute.experimental.MultiWorkerMirroredStrategy)
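# A minimal usage sketch (illustrative, not part of the original file): patch the
# strategy classes, build a distributed "dataset" that now replays one cached
# synthetic batch, then restore the original methods. Assumes `strategy` is an
# instance of one of the patched classes (e.g. tf.distribute.MirroredStrategy)
# and `dataset` is a batched tf.data.Dataset.
def _example_synthetic_data(strategy, dataset):
    set_up_synthetic_data()
    try:
        dist_dataset = strategy.experimental_distribute_dataset(dataset)
        first_batch = next(iter(dist_dataset))  # the same tensors every step
    finally:
        undo_set_up_synthetic_data()
    return first_batch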
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/token_classification.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/token_classification.py
deleted file mode 100644
index ff6163481e6f267a5aefac352ff38447a275a13a..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/networks/token_classification.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Classification network."""
-# pylint: disable=g-classes-have-attributes
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-import tensorflow as tf
-
-
-@tf.keras.utils.register_keras_serializable(package='Text')
-class TokenClassification(tf.keras.Model):
- """TokenClassification network head for BERT modeling.
-
- This network implements a simple token classifier head based on a dense layer.
-
- Arguments:
- input_width: The innermost dimension of the input tensor to this network.
- num_classes: The number of classes that this network should classify to.
- initializer: The initializer for the dense layer in this network. Defaults to
- a Glorot uniform initializer.
- output: The output style for this network. Can be either 'logits' or
- 'predictions'.
- """
-
- def __init__(self,
- input_width,
- num_classes,
- initializer='glorot_uniform',
- output='logits',
- **kwargs):
- self._self_setattr_tracking = False
- self._config_dict = {
- 'input_width': input_width,
- 'num_classes': num_classes,
- 'initializer': initializer,
- 'output': output,
- }
-
- sequence_data = tf.keras.layers.Input(
- shape=(None, input_width), name='sequence_data', dtype=tf.float32)
-
- self.logits = tf.keras.layers.Dense(
- num_classes,
- activation=None,
- kernel_initializer=initializer,
- name='predictions/transform/logits')(
- sequence_data)
- predictions = tf.keras.layers.Activation(tf.nn.log_softmax)(self.logits)
-
- if output == 'logits':
- output_tensors = self.logits
- elif output == 'predictions':
- output_tensors = predictions
- else:
- raise ValueError(
- ('Unknown `output` value "%s". `output` can be either "logits" or '
- '"predictions"') % output)
-
- super(TokenClassification, self).__init__(
- inputs=[sequence_data], outputs=output_tensors, **kwargs)
-
- def get_config(self):
- return self._config_dict
-
- @classmethod
- def from_config(cls, config, custom_objects=None):
- return cls(**config)
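# A minimal usage sketch (illustrative, not part of the original file): attach the
# head to a batch of encoder outputs with hidden size 768 and 9 tag classes.
def _example_token_classification():
    head = TokenClassification(input_width=768, num_classes=9, output='predictions')
    sequence_output = tf.random.uniform((2, 16, 768))  # [batch, seq_len, hidden]
    log_probs = head(sequence_output)                  # [2, 16, 9] log-probabilities
    return log_probs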
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_test.py
deleted file mode 100644
index 227b43dc6ff194ab74effc37214ae9253823310d..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_test.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Test Transformer model."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import tensorflow as tf
-
-from official.nlp.transformer import model_params
-from official.nlp.transformer import transformer
-
-
-class TransformerV2Test(tf.test.TestCase):
-
- def setUp(self):
- self.params = params = model_params.TINY_PARAMS
- params["batch_size"] = params["default_batch_size"] = 16
- params["use_synthetic_data"] = True
- params["hidden_size"] = 12
- params["num_hidden_layers"] = 2
- params["filter_size"] = 14
- params["num_heads"] = 2
- params["vocab_size"] = 41
- params["extra_decode_length"] = 2
- params["beam_size"] = 3
- params["dtype"] = tf.float32
-
- def test_create_model_train(self):
- model = transformer.create_model(self.params, True)
- inputs, outputs = model.inputs, model.outputs
- self.assertEqual(len(inputs), 2)
- self.assertEqual(len(outputs), 1)
- self.assertEqual(inputs[0].shape.as_list(), [None, None])
- self.assertEqual(inputs[0].dtype, tf.int64)
- self.assertEqual(inputs[1].shape.as_list(), [None, None])
- self.assertEqual(inputs[1].dtype, tf.int64)
- self.assertEqual(outputs[0].shape.as_list(), [None, None, 41])
- self.assertEqual(outputs[0].dtype, tf.float32)
-
- def test_create_model_not_train(self):
- model = transformer.create_model(self.params, False)
- inputs, outputs = model.inputs, model.outputs
- self.assertEqual(len(inputs), 1)
- self.assertEqual(len(outputs), 2)
- self.assertEqual(inputs[0].shape.as_list(), [None, None])
- self.assertEqual(inputs[0].dtype, tf.int64)
- self.assertEqual(outputs[0].shape.as_list(), [None, None])
- self.assertEqual(outputs[0].dtype, tf.int32)
- self.assertEqual(outputs[1].shape.as_list(), [None])
- self.assertEqual(outputs[1].dtype, tf.float32)
-
-
-if __name__ == "__main__":
- tf.test.main()
diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_device.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_device.py
deleted file mode 100644
index d8974fc48d1fc77d227745191579df16b2e46bcc..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/_device.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Flags for managing compute devices. Currently only contains TPU flags."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from absl import flags
-from absl import logging
-
-from official.utils.flags._conventions import help_wrap
-
-
-def require_cloud_storage(flag_names):
- """Register a validator to check directory flags.
- Args:
- flag_names: An iterable of strings containing the names of flags to be
- checked.
- """
- msg = "TPU requires GCS path for {}".format(", ".join(flag_names))
- @flags.multi_flags_validator(["tpu"] + flag_names, message=msg)
- def _path_check(flag_values): # pylint: disable=missing-docstring
- if flag_values["tpu"] is None:
- return True
-
- valid_flags = True
- for key in flag_names:
- if not flag_values[key].startswith("gs://"):
- logging.error("%s must be a GCS path.", key)
- valid_flags = False
-
- return valid_flags
-
-
-def define_device(tpu=True):
- """Register device specific flags.
- Args:
- tpu: Create flags to specify TPU operation.
- Returns:
- A list of flags for core.py to marks as key flags.
- """
-
- key_flags = []
-
- if tpu:
- flags.DEFINE_string(
- name="tpu", default=None,
- help=help_wrap(
- "The Cloud TPU to use for training. This should be either the name "
- "used when creating the Cloud TPU, or a "
- "grpc://ip.address.of.tpu:8470 url. Passing `local` will use the"
- "CPU of the local instance instead. (Good for debugging.)"))
- key_flags.append("tpu")
-
- flags.DEFINE_string(
- name="tpu_zone", default=None,
- help=help_wrap(
- "[Optional] GCE zone where the Cloud TPU is located in. If not "
- "specified, we will attempt to automatically detect the GCE "
- "project from metadata."))
-
- flags.DEFINE_string(
- name="tpu_gcp_project", default=None,
- help=help_wrap(
- "[Optional] Project name for the Cloud TPU-enabled project. If not "
- "specified, we will attempt to automatically detect the GCE "
- "project from metadata."))
-
- flags.DEFINE_integer(name="num_tpu_shards", default=8,
- help=help_wrap("Number of shards (TPU chips)."))
-
- return key_flags
diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_agent.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_agent.py
deleted file mode 100644
index 13fc7da2dc89a1fbcc7fa5efbbce87008580aa92..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_agent.py
+++ /dev/null
@@ -1,1297 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-"""Language model agent.
-
-Agent outputs code in a sequence just like a language model. Can be trained
-as a language model or using RL, or a combination of the two.
-"""
-
-from collections import namedtuple
-from math import exp
-from math import log
-import time
-
-from absl import logging
-import numpy as np
-from six.moves import xrange
-import tensorflow as tf
-
-from common import rollout as rollout_lib # brain coder
-from common import utils # brain coder
-from single_task import misc # brain coder
-
-
-# Experiments in the ICLR 2018 paper used reduce_sum instead of reduce_mean for
-# some losses. We make all losses batch_size independent, and multiply the
-# changed losses by 64, which was the fixed batch_size when the experiments
-# were run. The loss hyperparameters still match what is reported in the paper.
-MAGIC_LOSS_MULTIPLIER = 64
-
-
-def rshift_time(tensor_2d, fill=misc.BF_EOS_INT):
- """Right shifts a 2D tensor along the time dimension (axis-1)."""
- dim_0 = tf.shape(tensor_2d)[0]
- fill_tensor = tf.fill([dim_0, 1], fill)
- return tf.concat([fill_tensor, tensor_2d[:, :-1]], axis=1)
-
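# A small worked example of rshift_time (illustrative, not part of the original
# file): each row is shifted right by one step and the fill id is inserted at
# position 0, so [1, 2, 3] becomes [EOS, 1, 2].
def _example_rshift_time():
    tokens = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
    return rshift_time(tokens)  # [[BF_EOS_INT, 1, 2], [BF_EOS_INT, 4, 5]]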
-
-def join(a, b):
- # Concat a and b along 0-th dim.
- if a is None or len(a) == 0: # pylint: disable=g-explicit-length-test
- return b
- if b is None or len(b) == 0: # pylint: disable=g-explicit-length-test
- return a
- return np.concatenate((a, b))
-
-
-def make_optimizer(kind, lr):
- if kind == 'sgd':
- return tf.train.GradientDescentOptimizer(lr)
- elif kind == 'adam':
- return tf.train.AdamOptimizer(lr)
- elif kind == 'rmsprop':
- return tf.train.RMSPropOptimizer(learning_rate=lr, decay=0.99)
- else:
- raise ValueError('Optimizer type "%s" not recognized.' % kind)
-
-
-class LinearWrapper(tf.contrib.rnn.RNNCell):
- """RNNCell wrapper that adds a linear layer to the output."""
-
- def __init__(self, cell, output_size, dtype=tf.float32, suppress_index=None):
- self.cell = cell
- self._output_size = output_size
- self._dtype = dtype
- self._suppress_index = suppress_index
- self.smallest_float = -2.4e38
-
- def __call__(self, inputs, state, scope=None):
- with tf.variable_scope(type(self).__name__):
- outputs, state = self.cell(inputs, state, scope=scope)
- logits = tf.matmul(
- outputs,
- tf.get_variable('w_output',
- [self.cell.output_size, self.output_size],
- dtype=self._dtype))
- if self._suppress_index is not None:
- # Replace the target index with -inf, so that it never gets selected.
- batch_size = tf.shape(logits)[0]
- logits = tf.concat(
- [logits[:, :self._suppress_index],
- tf.fill([batch_size, 1], self.smallest_float),
- logits[:, self._suppress_index + 1:]],
- axis=1)
-
- return logits, state
-
- @property
- def output_size(self):
- return self._output_size
-
- @property
- def state_size(self):
- return self.cell.state_size
-
- def zero_state(self, batch_size, dtype):
- return self.cell.zero_state(batch_size, dtype)
-
-
-UpdateStepResult = namedtuple(
- 'UpdateStepResult',
- ['global_step', 'global_npe', 'summaries_list', 'gradients_dict'])
-
-
-class AttrDict(dict):
- """Dict with attributes as keys.
-
- https://stackoverflow.com/a/14620633
- """
-
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-class LMAgent(object):
- """Language model agent."""
- action_space = misc.bf_num_tokens()
- observation_space = misc.bf_num_tokens()
-
- def __init__(self, global_config, task_id=0,
- logging_file=None,
- experience_replay_file=None,
- global_best_reward_fn=None,
- found_solution_op=None,
- assign_code_solution_fn=None,
- program_count=None,
- do_iw_summaries=False,
- stop_on_success=True,
- dtype=tf.float32,
- verbose_level=0,
- is_local=True):
- self.config = config = global_config.agent
- self.logging_file = logging_file
- self.experience_replay_file = experience_replay_file
- self.task_id = task_id
- self.verbose_level = verbose_level
- self.global_best_reward_fn = global_best_reward_fn
- self.found_solution_op = found_solution_op
- self.assign_code_solution_fn = assign_code_solution_fn
- self.parent_scope_name = tf.get_variable_scope().name
- self.dtype = dtype
- self.allow_eos_token = config.eos_token
- self.stop_on_success = stop_on_success
- self.pi_loss_hparam = config.pi_loss_hparam
- self.vf_loss_hparam = config.vf_loss_hparam
- self.is_local = is_local
-
- self.top_reward = 0.0
- self.embeddings_trainable = True
-
- self.no_op = tf.no_op()
-
- self.learning_rate = tf.constant(
- config.lr, dtype=dtype, name='learning_rate')
- self.initializer = tf.contrib.layers.variance_scaling_initializer(
- factor=config.param_init_factor,
- mode='FAN_AVG',
- uniform=True,
- dtype=dtype) # TF's default initializer.
- tf.get_variable_scope().set_initializer(self.initializer)
-
- self.a2c = config.ema_baseline_decay == 0
- if not self.a2c:
- logging.info('Using exponential moving average REINFORCE baselines.')
- self.ema_baseline_decay = config.ema_baseline_decay
- self.ema_by_len = [0.0] * global_config.timestep_limit
- else:
- logging.info('Using advantage (a2c) with learned value function.')
- self.ema_baseline_decay = 0.0
- self.ema_by_len = None
-
- # Top-k
- if config.topk and config.topk_loss_hparam:
- self.topk_loss_hparam = config.topk_loss_hparam
- self.topk_batch_size = config.topk_batch_size
- if self.topk_batch_size <= 0:
- raise ValueError('topk_batch_size must be a positive integer. Got %s'
- % self.topk_batch_size)
- self.top_episodes = utils.MaxUniquePriorityQueue(config.topk)
- logging.info('Made max-priorty-queue with capacity %d',
- self.top_episodes.capacity)
- else:
- self.top_episodes = None
- self.topk_loss_hparam = 0.0
- logging.info('No max-priorty-queue')
-
- # Experience replay.
- self.replay_temperature = config.replay_temperature
- self.num_replay_per_batch = int(global_config.batch_size * config.alpha)
- self.num_on_policy_per_batch = (
- global_config.batch_size - self.num_replay_per_batch)
- self.replay_alpha = (
- self.num_replay_per_batch / float(global_config.batch_size))
- logging.info('num_replay_per_batch: %d', self.num_replay_per_batch)
- logging.info('num_on_policy_per_batch: %d', self.num_on_policy_per_batch)
- logging.info('replay_alpha: %s', self.replay_alpha)
- if self.num_replay_per_batch > 0:
- # Train with off-policy episodes from replay buffer.
- start_time = time.time()
- self.experience_replay = utils.RouletteWheel(
- unique_mode=True, save_file=experience_replay_file)
- logging.info('Took %s sec to load replay buffer from disk.',
- int(time.time() - start_time))
- logging.info('Replay buffer file location: "%s"',
- self.experience_replay.save_file)
- else:
- # Only train on-policy.
- self.experience_replay = None
-
- if program_count is not None:
- self.program_count = program_count
- self.program_count_add_ph = tf.placeholder(
- tf.int64, [], 'program_count_add_ph')
- self.program_count_add_op = self.program_count.assign_add(
- self.program_count_add_ph)
-
- ################################
- # RL policy and value networks #
- ################################
- batch_size = global_config.batch_size
- logging.info('batch_size: %d', batch_size)
-
- self.policy_cell = LinearWrapper(
- tf.contrib.rnn.MultiRNNCell(
- [tf.contrib.rnn.BasicLSTMCell(cell_size)
- for cell_size in config.policy_lstm_sizes]),
- self.action_space,
- dtype=dtype,
- suppress_index=None if self.allow_eos_token else misc.BF_EOS_INT)
- self.value_cell = LinearWrapper(
- tf.contrib.rnn.MultiRNNCell(
- [tf.contrib.rnn.BasicLSTMCell(cell_size)
- for cell_size in config.value_lstm_sizes]),
- 1,
- dtype=dtype)
-
- obs_embedding_scope = 'obs_embed'
- with tf.variable_scope(
- obs_embedding_scope,
- initializer=tf.random_uniform_initializer(minval=-1.0, maxval=1.0)):
- obs_embeddings = tf.get_variable(
- 'embeddings',
- [self.observation_space, config.obs_embedding_size],
- dtype=dtype, trainable=self.embeddings_trainable)
- self.obs_embeddings = obs_embeddings
-
- ################################
- # RL policy and value networks #
- ################################
-
- initial_state = tf.fill([batch_size], misc.BF_EOS_INT)
- def loop_fn(loop_time, cell_output, cell_state, loop_state):
- """Function called by tf.nn.raw_rnn to instantiate body of the while_loop.
-
- See https://www.tensorflow.org/api_docs/python/tf/nn/raw_rnn for more
- information.
-
- When time is 0, and cell_output, cell_state, loop_state are all None,
- `loop_fn` will create the initial input, internal cell state, and loop
- state. When time > 0, `loop_fn` will operate on previous cell output,
- state, and loop state.
-
- Args:
- loop_time: A scalar tensor holding the current timestep (zero based
- counting).
- cell_output: Output of the raw_rnn cell at the current timestep.
- cell_state: Cell internal state at the current timestep.
- loop_state: Additional loop state. These tensors were returned by the
- previous call to `loop_fn`.
-
- Returns:
- elements_finished: Bool tensor of shape [batch_size] which marks each
- sequence in the batch as being finished or not finished.
- next_input: A tensor containing input to be fed into the cell at the
- next timestep.
- next_cell_state: Cell internal state to be fed into the cell at the
- next timestep.
- emit_output: Tensor to be added to the TensorArray returned by raw_rnn
- as output from the while_loop.
- next_loop_state: Additional loop state. These tensors will be fed back
- into the next call to `loop_fn` as `loop_state`.
- """
- if cell_output is None: # 0th time step.
- next_cell_state = self.policy_cell.zero_state(batch_size, dtype)
- elements_finished = tf.zeros([batch_size], tf.bool)
- output_lengths = tf.ones([batch_size], dtype=tf.int32)
- next_input = tf.gather(obs_embeddings, initial_state)
- emit_output = None
- next_loop_state = (
- tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True),
- output_lengths,
- elements_finished
- )
- else:
- scaled_logits = cell_output * config.softmax_tr # Scale temperature.
- prev_chosen, prev_output_lengths, prev_elements_finished = loop_state
- next_cell_state = cell_state
- chosen_outputs = tf.to_int32(tf.where(
- tf.logical_not(prev_elements_finished),
- tf.multinomial(logits=scaled_logits, num_samples=1)[:, 0],
- tf.zeros([batch_size], dtype=tf.int64)))
- elements_finished = tf.logical_or(
- tf.equal(chosen_outputs, misc.BF_EOS_INT),
- loop_time >= global_config.timestep_limit)
- output_lengths = tf.where(
- elements_finished,
- prev_output_lengths,
- # length includes EOS token. empty seq has len 1.
- tf.tile(tf.expand_dims(loop_time + 1, 0), [batch_size])
- )
- next_input = tf.gather(obs_embeddings, chosen_outputs)
- emit_output = scaled_logits
- next_loop_state = (prev_chosen.write(loop_time - 1, chosen_outputs),
- output_lengths,
- tf.logical_or(prev_elements_finished,
- elements_finished))
- return (elements_finished, next_input, next_cell_state, emit_output,
- next_loop_state)
-
- with tf.variable_scope('policy'):
- (decoder_outputs_ta,
- _, # decoder_state
- (sampled_output_ta, output_lengths, _)) = tf.nn.raw_rnn(
- cell=self.policy_cell,
- loop_fn=loop_fn)
- policy_logits = tf.transpose(decoder_outputs_ta.stack(), (1, 0, 2),
- name='policy_logits')
- sampled_tokens = tf.transpose(sampled_output_ta.stack(), (1, 0),
- name='sampled_tokens')
- # Add SOS to beginning of the sequence.
- rshift_sampled_tokens = rshift_time(sampled_tokens, fill=misc.BF_EOS_INT)
-
- # Initial state is 0, 2nd state is first token.
- # Note: If value of last state is computed, this will be used as bootstrap.
- if self.a2c:
- with tf.variable_scope('value'):
- value_output, _ = tf.nn.dynamic_rnn(
- self.value_cell,
- tf.gather(obs_embeddings, rshift_sampled_tokens),
- sequence_length=output_lengths,
- dtype=dtype)
- value = tf.squeeze(value_output, axis=[2])
- else:
- value = tf.zeros([], dtype=dtype)
-
- # Tensors used when sampling actions from the agent; `given_batch` below holds
- # the tensors used for gradient updates on the agent.
- self.sampled_batch = AttrDict(
- logits=policy_logits,
- value=value,
- tokens=sampled_tokens,
- episode_lengths=output_lengths,
- probs=tf.nn.softmax(policy_logits),
- log_probs=tf.nn.log_softmax(policy_logits))
-
- # adjusted_lengths can be less than the full length of each episode.
- # Use this to train on only part of an episode (starting from t=0).
- self.adjusted_lengths = tf.placeholder(
- tf.int32, [None], name='adjusted_lengths')
- self.policy_multipliers = tf.placeholder(
- dtype,
- [None, None],
- name='policy_multipliers')
- # Empirical value, i.e. discounted sum of observed future rewards from each
- # time step in the episode.
- self.empirical_values = tf.placeholder(
- dtype,
- [None, None],
- name='empirical_values')
-
- # Off-policy training. Just add supervised loss to the RL loss.
- self.off_policy_targets = tf.placeholder(
- tf.int32,
- [None, None],
- name='off_policy_targets')
- self.off_policy_target_lengths = tf.placeholder(
- tf.int32, [None], name='off_policy_target_lengths')
-
- self.actions = tf.placeholder(tf.int32, [None, None], name='actions')
- # Add SOS to beginning of the sequence.
- inputs = rshift_time(self.actions, fill=misc.BF_EOS_INT)
- with tf.variable_scope('policy', reuse=True):
- logits, _ = tf.nn.dynamic_rnn(
- self.policy_cell, tf.gather(obs_embeddings, inputs),
- sequence_length=self.adjusted_lengths,
- dtype=dtype)
-
- if self.a2c:
- with tf.variable_scope('value', reuse=True):
- value_output, _ = tf.nn.dynamic_rnn(
- self.value_cell,
- tf.gather(obs_embeddings, inputs),
- sequence_length=self.adjusted_lengths,
- dtype=dtype)
- value2 = tf.squeeze(value_output, axis=[2])
- else:
- value2 = tf.zeros([], dtype=dtype)
-
- self.given_batch = AttrDict(
- logits=logits,
- value=value2,
- tokens=sampled_tokens,
- episode_lengths=self.adjusted_lengths,
- probs=tf.nn.softmax(logits),
- log_probs=tf.nn.log_softmax(logits))
-
- # Episode masks.
- max_episode_length = tf.shape(self.actions)[1]
- # range_row shape: [1, max_episode_length]
- range_row = tf.expand_dims(tf.range(max_episode_length), 0)
- episode_masks = tf.cast(
- tf.less(range_row, tf.expand_dims(self.given_batch.episode_lengths, 1)),
- dtype=dtype)
- episode_masks_3d = tf.expand_dims(episode_masks, 2)
-
- # Length adjusted episodes.
- self.a_probs = a_probs = self.given_batch.probs * episode_masks_3d
- self.a_log_probs = a_log_probs = (
- self.given_batch.log_probs * episode_masks_3d)
- self.a_value = a_value = self.given_batch.value * episode_masks
- self.a_policy_multipliers = a_policy_multipliers = (
- self.policy_multipliers * episode_masks)
- if self.a2c:
- self.a_empirical_values = a_empirical_values = (
- self.empirical_values * episode_masks)
-
- # pi_loss is scalar
- acs_onehot = tf.one_hot(self.actions, self.action_space, dtype=dtype)
- self.acs_onehot = acs_onehot
- chosen_masked_log_probs = acs_onehot * a_log_probs
- pi_target = tf.expand_dims(a_policy_multipliers, -1)
- pi_loss_per_step = chosen_masked_log_probs * pi_target # Maximize.
- self.pi_loss = pi_loss = (
- -tf.reduce_mean(tf.reduce_sum(pi_loss_per_step, axis=[1, 2]), axis=0)
- * MAGIC_LOSS_MULTIPLIER) # Minimize.
- assert len(self.pi_loss.shape) == 0 # pylint: disable=g-explicit-length-test
-
- # shape: [batch_size, time]
- self.chosen_log_probs = tf.reduce_sum(chosen_masked_log_probs, axis=2)
- self.chosen_probs = tf.reduce_sum(acs_onehot * a_probs, axis=2)
-
- # loss of value function
- if self.a2c:
- vf_loss_per_step = tf.square(a_value - a_empirical_values)
- self.vf_loss = vf_loss = (
- tf.reduce_mean(tf.reduce_sum(vf_loss_per_step, axis=1), axis=0)
- * MAGIC_LOSS_MULTIPLIER) # Minimize.
- assert len(self.vf_loss.shape) == 0 # pylint: disable=g-explicit-length-test
- else:
- self.vf_loss = vf_loss = 0.0
-
- # Maximize entropy regularizer
- self.entropy = entropy = (
- -tf.reduce_mean(
- tf.reduce_sum(a_probs * a_log_probs, axis=[1, 2]), axis=0)
- * MAGIC_LOSS_MULTIPLIER) # Maximize
- self.negentropy = -entropy # Minimize negentropy.
- assert len(self.negentropy.shape) == 0 # pylint: disable=g-explicit-length-test
-
- # off-policy loss
- self.offp_switch = tf.placeholder(dtype, [], name='offp_switch')
- if self.top_episodes is not None:
- # Add SOS to beginning of the sequence.
- offp_inputs = tf.gather(obs_embeddings,
- rshift_time(self.off_policy_targets,
- fill=misc.BF_EOS_INT))
- with tf.variable_scope('policy', reuse=True):
- offp_logits, _ = tf.nn.dynamic_rnn(
- self.policy_cell, offp_inputs, self.off_policy_target_lengths,
- dtype=dtype) # shape: [batch_size, time, action_space]
- topk_loss_per_step = tf.nn.sparse_softmax_cross_entropy_with_logits(
- labels=self.off_policy_targets,
- logits=offp_logits,
- name='topk_loss_per_logit')
- # Take mean over batch dimension so that the loss multiplier strength is
- # independent of batch size. Sum over time dimension.
- topk_loss = tf.reduce_mean(
- tf.reduce_sum(topk_loss_per_step, axis=1), axis=0)
- assert len(topk_loss.shape) == 0 # pylint: disable=g-explicit-length-test
- self.topk_loss = topk_loss * self.offp_switch
- logging.info('Including off policy loss.')
- else:
- self.topk_loss = topk_loss = 0.0
-
- self.entropy_hparam = tf.constant(
- config.entropy_beta, dtype=dtype, name='entropy_beta')
-
- self.pi_loss_term = pi_loss * self.pi_loss_hparam
- self.vf_loss_term = vf_loss * self.vf_loss_hparam
- self.entropy_loss_term = self.negentropy * self.entropy_hparam
- self.topk_loss_term = self.topk_loss_hparam * topk_loss
- self.loss = (
- self.pi_loss_term
- + self.vf_loss_term
- + self.entropy_loss_term
- + self.topk_loss_term)
-
- params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
- tf.get_variable_scope().name)
- self.trainable_variables = params
- self.sync_variables = self.trainable_variables
- non_embedding_params = [p for p in params
- if obs_embedding_scope not in p.name]
- self.non_embedding_params = non_embedding_params
- self.params = params
-
- if config.regularizer:
- logging.info('Adding L2 regularizer with scale %.2f.',
- config.regularizer)
- self.regularizer = config.regularizer * sum(
- tf.nn.l2_loss(w) for w in non_embedding_params)
- self.loss += self.regularizer
- else:
- logging.info('Skipping regularizer.')
- self.regularizer = 0.0
-
- # Only build gradients graph for local model.
- if self.is_local:
- unclipped_grads = tf.gradients(self.loss, params)
- self.dense_unclipped_grads = [
- tf.convert_to_tensor(g) for g in unclipped_grads]
- self.grads, self.global_grad_norm = tf.clip_by_global_norm(
- unclipped_grads, config.grad_clip_threshold)
- self.gradients_dict = dict(zip(params, self.grads))
- self.optimizer = make_optimizer(config.optimizer, self.learning_rate)
- self.all_variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
- tf.get_variable_scope().name)
-
- self.do_iw_summaries = do_iw_summaries
- if self.do_iw_summaries:
- b = None
- self.log_iw_replay_ph = tf.placeholder(tf.float32, [b],
- 'log_iw_replay_ph')
- self.log_iw_policy_ph = tf.placeholder(tf.float32, [b],
- 'log_iw_policy_ph')
- self.log_prob_replay_ph = tf.placeholder(tf.float32, [b],
- 'log_prob_replay_ph')
- self.log_prob_policy_ph = tf.placeholder(tf.float32, [b],
- 'log_prob_policy_ph')
- self.log_norm_replay_weights_ph = tf.placeholder(
- tf.float32, [b], 'log_norm_replay_weights_ph')
- self.iw_summary_op = tf.summary.merge([
- tf.summary.histogram('is/log_iw_replay', self.log_iw_replay_ph),
- tf.summary.histogram('is/log_iw_policy', self.log_iw_policy_ph),
- tf.summary.histogram('is/log_prob_replay', self.log_prob_replay_ph),
- tf.summary.histogram('is/log_prob_policy', self.log_prob_policy_ph),
- tf.summary.histogram(
- 'is/log_norm_replay_weights', self.log_norm_replay_weights_ph),
- ])
-
- def make_summary_ops(self):
- """Construct summary ops for the model."""
-    # size = number of timesteps across the entire batch. Numbers normalized
-    # by size will not be affected by the amount of padding at the ends of
-    # sequences in the batch.
- size = tf.cast(
- tf.reduce_sum(self.given_batch.episode_lengths), dtype=self.dtype)
- offp_size = tf.cast(tf.reduce_sum(self.off_policy_target_lengths),
- dtype=self.dtype)
- scope_prefix = self.parent_scope_name
-
- def _remove_prefix(prefix, name):
- assert name.startswith(prefix)
- return name[len(prefix):]
-
- # RL summaries.
- self.rl_summary_op = tf.summary.merge(
- [tf.summary.scalar('model/policy_loss', self.pi_loss / size),
- tf.summary.scalar('model/value_loss', self.vf_loss / size),
- tf.summary.scalar('model/topk_loss', self.topk_loss / offp_size),
- tf.summary.scalar('model/entropy', self.entropy / size),
- tf.summary.scalar('model/loss', self.loss / size),
- tf.summary.scalar('model/grad_norm',
- tf.global_norm(self.grads)),
- tf.summary.scalar('model/unclipped_grad_norm', self.global_grad_norm),
- tf.summary.scalar('model/non_embedding_var_norm',
- tf.global_norm(self.non_embedding_params)),
- tf.summary.scalar('hparams/entropy_beta', self.entropy_hparam),
- tf.summary.scalar('hparams/topk_loss_hparam', self.topk_loss_hparam),
- tf.summary.scalar('hparams/learning_rate', self.learning_rate),
- tf.summary.scalar('model/trainable_var_norm',
- tf.global_norm(self.trainable_variables)),
- tf.summary.scalar('loss/loss', self.loss),
- tf.summary.scalar('loss/entropy', self.entropy_loss_term),
- tf.summary.scalar('loss/vf', self.vf_loss_term),
- tf.summary.scalar('loss/policy', self.pi_loss_term),
- tf.summary.scalar('loss/offp', self.topk_loss_term)] +
- [tf.summary.scalar(
- 'param_norms/' + _remove_prefix(scope_prefix + '/', p.name),
- tf.norm(p))
- for p in self.params] +
- [tf.summary.scalar(
- 'grad_norms/' + _remove_prefix(scope_prefix + '/', p.name),
- tf.norm(g))
- for p, g in zip(self.params, self.grads)] +
- [tf.summary.scalar(
- 'unclipped_grad_norms/' + _remove_prefix(scope_prefix + '/',
- p.name),
- tf.norm(g))
- for p, g in zip(self.params, self.dense_unclipped_grads)])
-
- self.text_summary_placeholder = tf.placeholder(tf.string, shape=[])
- self.rl_text_summary_op = tf.summary.text('rl',
- self.text_summary_placeholder)
-
- def _rl_text_summary(self, session, step, npe, tot_r, num_steps,
- input_case, code_output, code, reason):
- """Logs summary about a single episode and creates a text_summary for TB.
-
- Args:
- session: tf.Session instance.
- step: Global training step.
- npe: Number of programs executed so far.
- tot_r: Total reward.
- num_steps: Number of timesteps in the episode (i.e. code length).
- input_case: Inputs for test cases.
- code_output: Outputs produced by running the code on the inputs.
- code: String representation of the code.
- reason: Reason for the reward assigned by the task.
-
- Returns:
- Serialized text summary data for tensorboard.
- """
- if not input_case:
- input_case = ' '
- if not code_output:
- code_output = ' '
- if not code:
- code = ' '
- text = (
- 'Tot R: **%.2f**; Len: **%d**; Reason: **%s**\n\n'
- 'Input: **`%s`**; Output: **`%s`**\n\nCode: **`%s`**'
- % (tot_r, num_steps, reason, input_case, code_output, code))
- text_summary = session.run(self.rl_text_summary_op,
- {self.text_summary_placeholder: text})
- logging.info(
- 'Step %d.\t NPE: %d\t Reason: %s.\t Tot R: %.2f.\t Length: %d. '
- '\tInput: %s \tOutput: %s \tProgram: %s',
- step, npe, reason, tot_r, num_steps, input_case,
- code_output, code)
- return text_summary
-
- def _rl_reward_summary(self, total_rewards):
- """Create summary ops that report on episode rewards.
-
- Creates summaries for average, median, max, and min rewards in the batch.
-
- Args:
- total_rewards: Tensor of shape [batch_size] containing the total reward
- from each episode in the batch.
-
- Returns:
- tf.Summary op.
- """
- tr = np.asarray(total_rewards)
- reward_summary = tf.Summary(value=[
- tf.Summary.Value(
- tag='reward/avg',
- simple_value=np.mean(tr)),
- tf.Summary.Value(
- tag='reward/med',
- simple_value=np.median(tr)),
- tf.Summary.Value(
- tag='reward/max',
- simple_value=np.max(tr)),
- tf.Summary.Value(
- tag='reward/min',
- simple_value=np.min(tr))])
- return reward_summary
-
- def _iw_summary(self, session, replay_iw, replay_log_probs,
- norm_replay_weights, on_policy_iw,
- on_policy_log_probs):
- """Compute summaries for importance weights at a given batch.
-
- Args:
- session: tf.Session instance.
- replay_iw: Importance weights for episodes from replay buffer.
- replay_log_probs: Total log probabilities of the replay episodes under the
- current policy.
- norm_replay_weights: Normalized replay weights, i.e. values in `replay_iw`
- divided by the total weight in the entire replay buffer. Note, this is
- also the probability of selecting each episode from the replay buffer
- (in a roulette wheel replay buffer).
- on_policy_iw: Importance weights for episodes sampled from the current
- policy.
- on_policy_log_probs: Total log probabilities of the on-policy episodes
- under the current policy.
-
- Returns:
- Serialized TF summaries. Use a summary writer to write these summaries to
- disk.
- """
- return session.run(
- self.iw_summary_op,
- {self.log_iw_replay_ph: np.log(replay_iw),
- self.log_iw_policy_ph: np.log(on_policy_iw),
- self.log_norm_replay_weights_ph: np.log(norm_replay_weights),
- self.log_prob_replay_ph: replay_log_probs,
- self.log_prob_policy_ph: on_policy_log_probs})
-
- def _compute_iw(self, policy_log_probs, replay_weights):
- """Compute importance weights for a batch of episodes.
-
- Arguments are iterables of length batch_size.
-
- Args:
- policy_log_probs: Log probability of each episode under the current
- policy.
- replay_weights: Weight of each episode in the replay buffer. 0 for
- episodes not sampled from the replay buffer (i.e. sampled from the
- policy).
-
- Returns:
- Numpy array of shape [batch_size] containing the importance weight for
- each episode in the batch.
- """
- log_total_replay_weight = log(self.experience_replay.total_weight)
-
- # importance weight
- # = 1 / [(1 - a) + a * exp(log(replay_weight / total_weight / p))]
- # = 1 / ((1-a) + a*q/p)
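-    # For episodes sampled directly from the policy, replay_weight == 0, so
-    # q == 0 and the importance weight reduces to 1 / (1 - a).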
- a = float(self.replay_alpha)
-    a_com = 1.0 - a  # complement of a
- importance_weights = np.asarray(
- [1.0 / (a_com
- + a * exp((log(replay_weight) - log_total_replay_weight)
- - log_p))
- if replay_weight > 0 else 1.0 / a_com
- for log_p, replay_weight
- in zip(policy_log_probs, replay_weights)])
- return importance_weights
-
- def update_step(self, session, rl_batch, train_op, global_step_op,
- return_gradients=False):
- """Perform gradient update on the model.
-
- Args:
- session: tf.Session instance.
- rl_batch: RLBatch instance from data.py. Use DataManager to create a
- RLBatch for each call to update_step. RLBatch contains a batch of
- tasks.
- train_op: A TF op which will perform the gradient update. LMAgent does not
- own its training op, so that trainers can do distributed training
- and construct a specialized training op.
- global_step_op: A TF op which will return the current global step when
- run (should not increment it).
- return_gradients: If True, the gradients will be saved and returned from
- this method call. This is useful for testing.
-
- Returns:
- Results from the update step in a UpdateStepResult namedtuple, including
- global step, global NPE, serialized summaries, and optionally gradients.
- """
- assert self.is_local
-
- # Do update for REINFORCE or REINFORCE + replay buffer.
- if self.experience_replay is None:
- # Train with on-policy REINFORCE.
-
- # Sample new programs from the policy.
- num_programs_from_policy = rl_batch.batch_size
- (batch_actions,
- batch_values,
- episode_lengths) = session.run(
- [self.sampled_batch.tokens, self.sampled_batch.value,
- self.sampled_batch.episode_lengths])
- if episode_lengths.size == 0:
- # This should not happen.
- logging.warn(
- 'Shapes:\n'
- 'batch_actions.shape: %s\n'
- 'batch_values.shape: %s\n'
- 'episode_lengths.shape: %s\n',
- batch_actions.shape, batch_values.shape, episode_lengths.shape)
-
- # Compute rewards.
- code_scores = compute_rewards(
- rl_batch, batch_actions, episode_lengths)
- code_strings = code_scores.code_strings
- batch_tot_r = code_scores.total_rewards
- test_cases = code_scores.test_cases
- code_outputs = code_scores.code_outputs
- reasons = code_scores.reasons
-
- # Process on-policy samples.
- batch_targets, batch_returns = process_episodes(
- code_scores.batch_rewards, episode_lengths, a2c=self.a2c,
- baselines=self.ema_by_len,
- batch_values=batch_values)
- batch_policy_multipliers = batch_targets
- batch_emp_values = batch_returns if self.a2c else [[]]
- adjusted_lengths = episode_lengths
-
- if self.top_episodes:
- assert len(self.top_episodes) > 0 # pylint: disable=g-explicit-length-test
- off_policy_targets = [
- item for item, _
- in self.top_episodes.random_sample(self.topk_batch_size)]
- off_policy_target_lengths = [len(t) for t in off_policy_targets]
- off_policy_targets = utils.stack_pad(off_policy_targets, pad_axes=0,
- dtype=np.int32)
- offp_switch = 1
- else:
- off_policy_targets = [[0]]
- off_policy_target_lengths = [1]
- offp_switch = 0
-
- fetches = {
- 'global_step': global_step_op,
- 'program_count': self.program_count,
- 'summaries': self.rl_summary_op,
- 'train_op': train_op,
- 'gradients': self.gradients_dict if return_gradients else self.no_op}
- fetched = session.run(
- fetches,
- {self.actions: batch_actions,
- self.empirical_values: batch_emp_values,
- self.policy_multipliers: batch_policy_multipliers,
- self.adjusted_lengths: adjusted_lengths,
- self.off_policy_targets: off_policy_targets,
- self.off_policy_target_lengths: off_policy_target_lengths,
- self.offp_switch: offp_switch})
-
- combined_adjusted_lengths = adjusted_lengths
- combined_returns = batch_returns
- else:
- # Train with REINFORCE + off-policy replay buffer by using importance
- # sampling.
-
- # Sample new programs from the policy.
- # Note: batch size is constant. A full batch will be sampled, but not all
- # programs will be executed and added to the replay buffer. Those which
- # are not executed will be discarded and not counted.
- batch_actions, batch_values, episode_lengths, log_probs = session.run(
- [self.sampled_batch.tokens, self.sampled_batch.value,
- self.sampled_batch.episode_lengths, self.sampled_batch.log_probs])
- if episode_lengths.size == 0:
- # This should not happen.
- logging.warn(
- 'Shapes:\n'
- 'batch_actions.shape: %s\n'
- 'batch_values.shape: %s\n'
- 'episode_lengths.shape: %s\n',
- batch_actions.shape, batch_values.shape, episode_lengths.shape)
-
-      # Sample from the experience replay buffer.
- empty_replay_buffer = (
- self.experience_replay.is_empty()
- if self.experience_replay is not None else True)
- num_programs_from_replay_buff = (
- self.num_replay_per_batch if not empty_replay_buffer else 0)
- num_programs_from_policy = (
- rl_batch.batch_size - num_programs_from_replay_buff)
- if (not empty_replay_buffer) and num_programs_from_replay_buff:
- result = self.experience_replay.sample_many(
- num_programs_from_replay_buff)
- experience_samples, replay_weights = zip(*result)
- (replay_actions,
- replay_rewards,
- _, # log probs
- replay_adjusted_lengths) = zip(*experience_samples)
-
- replay_batch_actions = utils.stack_pad(replay_actions, pad_axes=0,
- dtype=np.int32)
-
- # compute log probs for replay samples under current policy
- all_replay_log_probs, = session.run(
- [self.given_batch.log_probs],
- {self.actions: replay_batch_actions,
- self.adjusted_lengths: replay_adjusted_lengths})
- replay_log_probs = [
- np.choose(replay_actions[i], all_replay_log_probs[i, :l].T).sum()
- for i, l in enumerate(replay_adjusted_lengths)]
- else:
- # Replay buffer is empty. Do not sample from it.
- replay_actions = None
- replay_policy_multipliers = None
- replay_adjusted_lengths = None
- replay_log_probs = None
- replay_weights = None
- replay_returns = None
- on_policy_weights = [0] * num_programs_from_replay_buff
-
- assert not self.a2c # TODO(danabo): Support A2C with importance sampling.
-
- # Compute rewards.
- code_scores = compute_rewards(
- rl_batch, batch_actions, episode_lengths,
- batch_size=num_programs_from_policy)
- code_strings = code_scores.code_strings
- batch_tot_r = code_scores.total_rewards
- test_cases = code_scores.test_cases
- code_outputs = code_scores.code_outputs
- reasons = code_scores.reasons
-
- # Process on-policy samples.
- p = num_programs_from_policy
- batch_targets, batch_returns = process_episodes(
- code_scores.batch_rewards, episode_lengths[:p], a2c=False,
- baselines=self.ema_by_len)
- batch_policy_multipliers = batch_targets
- batch_emp_values = [[]]
- on_policy_returns = batch_returns
-
- # Process off-policy samples.
- if (not empty_replay_buffer) and num_programs_from_replay_buff:
- offp_batch_rewards = [
- [0.0] * (l - 1) + [r]
- for l, r in zip(replay_adjusted_lengths, replay_rewards)]
- assert len(offp_batch_rewards) == num_programs_from_replay_buff
- assert len(replay_adjusted_lengths) == num_programs_from_replay_buff
- replay_batch_targets, replay_returns = process_episodes(
- offp_batch_rewards, replay_adjusted_lengths, a2c=False,
- baselines=self.ema_by_len)
- # Convert 2D array back into ragged 2D list.
- replay_policy_multipliers = [
- replay_batch_targets[i, :l]
- for i, l
- in enumerate(
- replay_adjusted_lengths[:num_programs_from_replay_buff])]
-
- adjusted_lengths = episode_lengths[:num_programs_from_policy]
-
- if self.top_episodes:
- assert len(self.top_episodes) > 0 # pylint: disable=g-explicit-length-test
- off_policy_targets = [
- item for item, _
- in self.top_episodes.random_sample(self.topk_batch_size)]
- off_policy_target_lengths = [len(t) for t in off_policy_targets]
- off_policy_targets = utils.stack_pad(off_policy_targets, pad_axes=0,
- dtype=np.int32)
- offp_switch = 1
- else:
- off_policy_targets = [[0]]
- off_policy_target_lengths = [1]
- offp_switch = 0
-
- # On-policy episodes.
- if num_programs_from_policy:
- separate_actions = [
- batch_actions[i, :l]
- for i, l in enumerate(adjusted_lengths)]
- chosen_log_probs = [
- np.choose(separate_actions[i], log_probs[i, :l].T)
- for i, l in enumerate(adjusted_lengths)]
- new_experiences = [
- (separate_actions[i],
- batch_tot_r[i],
- chosen_log_probs[i].sum(), l)
- for i, l in enumerate(adjusted_lengths)]
- on_policy_policy_multipliers = [
- batch_policy_multipliers[i, :l]
- for i, l in enumerate(adjusted_lengths)]
- (on_policy_actions,
- _, # rewards
- on_policy_log_probs,
- on_policy_adjusted_lengths) = zip(*new_experiences)
- else:
- new_experiences = []
- on_policy_policy_multipliers = []
- on_policy_actions = []
- on_policy_log_probs = []
- on_policy_adjusted_lengths = []
-
- if (not empty_replay_buffer) and num_programs_from_replay_buff:
- # Look for new experiences in replay buffer. Assign weight if an episode
- # is in the buffer.
- on_policy_weights = [0] * num_programs_from_policy
- for i, cs in enumerate(code_strings):
- if self.experience_replay.has_key(cs):
- on_policy_weights[i] = self.experience_replay.get_weight(cs)
-
-      # Combine off-policy (replay) and on-policy episodes to train on.
- combined_actions = join(replay_actions, on_policy_actions)
- combined_policy_multipliers = join(
- replay_policy_multipliers, on_policy_policy_multipliers)
- combined_adjusted_lengths = join(
- replay_adjusted_lengths, on_policy_adjusted_lengths)
- combined_returns = join(replay_returns, on_policy_returns)
- combined_actions = utils.stack_pad(combined_actions, pad_axes=0)
- combined_policy_multipliers = utils.stack_pad(combined_policy_multipliers,
- pad_axes=0)
-      # P: total log probability of each episode under the current policy.
- combined_on_policy_log_probs = join(replay_log_probs, on_policy_log_probs)
-      # Q: weight of each episode under the replay buffer distribution.
- # Assume weight is zero for all sequences sampled from the policy.
- combined_q_weights = join(replay_weights, on_policy_weights)
-
- # Importance adjustment. Naive formulation:
- # E_{x~p}[f(x)] ~= 1/N sum_{x~p}(f(x)) ~= 1/N sum_{x~q}(f(x) * p(x)/q(x)).
- # p(x) is the policy, and q(x) is the off-policy distribution, i.e. replay
- # buffer distribution. Importance weight w(x) = p(x) / q(x).
-
- # Instead of sampling from the replay buffer only, we sample from a
- # mixture distribution of the policy and replay buffer.
- # We are sampling from the mixture a*q(x) + (1-a)*p(x), where 0 <= a <= 1.
- # Thus the importance weight w(x) = p(x) / (a*q(x) + (1-a)*p(x))
- # = 1 / ((1-a) + a*q(x)/p(x)) where q(x) is 0 for x sampled from the
- # policy.
- # Note: a = self.replay_alpha
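-      # Worked example: with a = 0.5 and q(x)/p(x) = 2 for a replay episode,
-      # w(x) = 1 / (0.5 + 0.5 * 2) = 2/3.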
- if empty_replay_buffer:
-        # The replay buffer is empty. Zero the policy multipliers so this
-        # step contributes no policy gradient; the buffer will have episodes
-        # in it next time.
- combined_policy_multipliers *= 0
- elif not num_programs_from_replay_buff:
- combined_policy_multipliers = np.ones([len(combined_actions), 1],
- dtype=np.float32)
- else:
- # If a < 1 compute importance weights
- # importance weight
- # = 1 / [(1 - a) + a * exp(log(replay_weight / total_weight / p))]
- # = 1 / ((1-a) + a*q/p)
- importance_weights = self._compute_iw(combined_on_policy_log_probs,
- combined_q_weights)
- if self.config.iw_normalize:
- importance_weights *= (
- float(rl_batch.batch_size) / importance_weights.sum())
- combined_policy_multipliers *= importance_weights.reshape(-1, 1)
-
- # Train on replay batch, top-k MLE.
- assert self.program_count is not None
- fetches = {
- 'global_step': global_step_op,
- 'program_count': self.program_count,
- 'summaries': self.rl_summary_op,
- 'train_op': train_op,
- 'gradients': self.gradients_dict if return_gradients else self.no_op}
- fetched = session.run(
- fetches,
- {self.actions: combined_actions,
- self.empirical_values: [[]], # replay_emp_values,
- self.policy_multipliers: combined_policy_multipliers,
- self.adjusted_lengths: combined_adjusted_lengths,
- self.off_policy_targets: off_policy_targets,
- self.off_policy_target_lengths: off_policy_target_lengths,
- self.offp_switch: offp_switch})
-
- # Add to experience replay buffer.
- self.experience_replay.add_many(
- objs=new_experiences,
- weights=[exp(r / self.replay_temperature) for r in batch_tot_r],
- keys=code_strings)
-
- # Update program count.
- session.run(
- [self.program_count_add_op],
- {self.program_count_add_ph: num_programs_from_policy})
-
-    # Update EMA baselines on the mini-batch we just trained on.
- if not self.a2c:
- for i in xrange(rl_batch.batch_size):
- episode_length = combined_adjusted_lengths[i]
- empirical_returns = combined_returns[i, :episode_length]
- for j in xrange(episode_length):
- # Update ema_baselines in place.
- self.ema_by_len[j] = (
- self.ema_baseline_decay * self.ema_by_len[j]
- + (1 - self.ema_baseline_decay) * empirical_returns[j])
-
- global_step = fetched['global_step']
- global_npe = fetched['program_count']
- core_summaries = fetched['summaries']
- summaries_list = [core_summaries]
-
- if num_programs_from_policy:
- s_i = 0
- text_summary = self._rl_text_summary(
- session,
- global_step,
- global_npe,
- batch_tot_r[s_i],
- episode_lengths[s_i], test_cases[s_i],
- code_outputs[s_i], code_strings[s_i], reasons[s_i])
- reward_summary = self._rl_reward_summary(batch_tot_r)
-
- is_best = False
- if self.global_best_reward_fn:
- # Save best reward.
- best_reward = np.max(batch_tot_r)
- is_best = self.global_best_reward_fn(session, best_reward)
-
- if self.found_solution_op is not None and 'correct' in reasons:
- session.run(self.found_solution_op)
-
- # Save program to disk for record keeping.
- if self.stop_on_success:
- solutions = [
- {'code': code_strings[i], 'reward': batch_tot_r[i],
- 'npe': global_npe}
- for i in xrange(len(reasons)) if reasons[i] == 'correct']
- elif is_best:
- solutions = [
- {'code': code_strings[np.argmax(batch_tot_r)],
- 'reward': np.max(batch_tot_r),
- 'npe': global_npe}]
- else:
- solutions = []
- if solutions:
- if self.assign_code_solution_fn:
- self.assign_code_solution_fn(session, solutions[0]['code'])
- with tf.gfile.FastGFile(self.logging_file, 'a') as writer:
- for solution_dict in solutions:
- writer.write(str(solution_dict) + '\n')
-
- max_i = np.argmax(batch_tot_r)
- max_tot_r = batch_tot_r[max_i]
- if max_tot_r >= self.top_reward:
-        self.top_reward = max_tot_r
- logging.info('Top code: r=%.2f, \t%s', max_tot_r, code_strings[max_i])
- if self.top_episodes is not None:
- self.top_episodes.push(
- max_tot_r, tuple(batch_actions[max_i, :episode_lengths[max_i]]))
-
- summaries_list += [text_summary, reward_summary]
-
- if self.do_iw_summaries and not empty_replay_buffer:
- # prob of replay samples under replay buffer sampling.
- norm_replay_weights = [
- w / self.experience_replay.total_weight
- for w in replay_weights]
- replay_iw = self._compute_iw(replay_log_probs, replay_weights)
- on_policy_iw = self._compute_iw(on_policy_log_probs, on_policy_weights)
- summaries_list.append(
- self._iw_summary(
- session, replay_iw, replay_log_probs, norm_replay_weights,
- on_policy_iw, on_policy_log_probs))
-
- return UpdateStepResult(
- global_step=global_step,
- global_npe=global_npe,
- summaries_list=summaries_list,
- gradients_dict=fetched['gradients'])
-
-
-def io_to_text(io_case, io_type):
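-  """Convert a single test-case input or output into a human-readable string."""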
- if isinstance(io_case, misc.IOTuple):
- # If there are many strings, join them with ','.
- return ','.join([io_to_text(e, io_type) for e in io_case])
- if io_type == misc.IOType.string:
- # There is one string. Return it.
- return misc.tokens_to_text(io_case)
- if (io_type == misc.IOType.integer
- or io_type == misc.IOType.boolean):
- if len(io_case) == 1:
- return str(io_case[0])
- return str(io_case)
-
-
-CodeScoreInfo = namedtuple(
- 'CodeScoreInfo',
- ['code_strings', 'batch_rewards', 'total_rewards', 'test_cases',
- 'code_outputs', 'reasons'])
-
-
-def compute_rewards(rl_batch, batch_actions, episode_lengths, batch_size=None):
- """Compute rewards for each episode in the batch.
-
- Args:
- rl_batch: A data.RLBatch instance. This holds information about the task
- each episode is solving, and a reward function for each episode.
- batch_actions: Contains batch of episodes. Each sequence of actions will be
- converted into a BF program and then scored. A numpy array of shape
- [batch_size, max_sequence_length].
- episode_lengths: The sequence length of each episode in the batch. Iterable
- of length batch_size.
- batch_size: (optional) number of programs to score. Use this to limit the
- number of programs executed from this batch. For example, when doing
- importance sampling some of the on-policy episodes will be discarded
- and they should not be executed. `batch_size` can be less than or equal
- to the size of the input batch.
-
- Returns:
- CodeScoreInfo namedtuple instance. This holds not just the computed rewards,
- but additional information computed during code execution which can be used
-    for debugging and monitoring. This includes: BF code strings, test cases
- the code was executed on, code outputs from those test cases, and reasons
- for success or failure.
- """
- code_strings = [
- ''.join([misc.bf_int2char(a) for a in action_sequence[:l]])
- for action_sequence, l in zip(batch_actions, episode_lengths)]
- if batch_size is None:
- batch_size = len(code_strings)
- else:
- assert batch_size <= len(code_strings)
- code_strings = code_strings[:batch_size]
-
- if isinstance(rl_batch.reward_fns, (list, tuple)):
- # reward_fns is a list of functions, same length as code_strings.
- assert len(rl_batch.reward_fns) >= batch_size
- r_fn_results = [
- rl_batch.reward_fns[i](code_strings[i]) for i in xrange(batch_size)]
- else:
- # reward_fns is allowed to be one function which processes a batch of code
- # strings. This is useful for efficiency and batch level computation.
- r_fn_results = rl_batch.reward_fns(code_strings)
-
- # Expecting that r_fn returns a list of rewards. Length of list equals
- # length of the code string (including EOS char).
-
- batch_rewards = [r.episode_rewards for r in r_fn_results]
- total_rewards = [sum(b) for b in batch_rewards]
- test_cases = [io_to_text(r.input_case, r.input_type) for r in r_fn_results]
- code_outputs = [io_to_text(r.code_output, r.output_type)
- for r in r_fn_results]
- reasons = [r.reason for r in r_fn_results]
- return CodeScoreInfo(
- code_strings=code_strings,
- batch_rewards=batch_rewards,
- total_rewards=total_rewards,
- test_cases=test_cases,
- code_outputs=code_outputs,
- reasons=reasons)
-
-
-def process_episodes(
- batch_rewards, episode_lengths, a2c=False, baselines=None,
- batch_values=None):
- """Compute REINFORCE targets.
-
- REINFORCE here takes the form:
- grad_t = grad[log(pi(a_t|c_t))*target_t]
- where c_t is context: i.e. RNN state or environment state (or both).
-
- Two types of targets are supported:
- 1) Advantage actor critic (a2c).
- 2) Vanilla REINFORCE with baseline.
-
- Args:
- batch_rewards: Rewards received in each episode in the batch. A numpy array
- of shape [batch_size, max_sequence_length]. Note, these are per-timestep
- rewards, not total reward.
- episode_lengths: Length of each episode. An iterable of length batch_size.
- a2c: A bool. Whether to compute a2c targets (True) or vanilla targets
- (False).
- baselines: If a2c is False, provide baselines for each timestep. This is a
- list (or indexable container) of length max_time. Note: baselines are
- shared across all episodes, which is why there is no batch dimension.
- It is up to the caller to update baselines accordingly.
- batch_values: If a2c is True, provide values computed by a value estimator.
- A numpy array of shape [batch_size, max_sequence_length].
-
- Returns:
- batch_targets: REINFORCE targets for each episode and timestep. A numpy
- array of shape [batch_size, max_sequence_length].
- batch_returns: Returns computed for each episode and timestep. This is for
- reference, and is not used in the REINFORCE gradient update (but was
- used to compute the targets). A numpy array of shape
- [batch_size, max_sequence_length].
- """
- num_programs = len(batch_rewards)
- assert num_programs <= len(episode_lengths)
- batch_returns = [None] * num_programs
- batch_targets = [None] * num_programs
- for i in xrange(num_programs):
- episode_length = episode_lengths[i]
- assert len(batch_rewards[i]) == episode_length
- # Compute target for each timestep.
- # If we are computing A2C:
- # target_t = advantage_t = R_t - V(c_t)
- # where V(c_t) is a learned value function (provided as `values`).
- # Otherwise:
- # target_t = R_t - baselines[t]
- # where `baselines` are provided.
- # In practice we use a more generalized formulation of advantage. See docs
- # for `discounted_advantage_and_rewards`.
- if a2c:
- # Compute advantage.
- assert batch_values is not None
- episode_values = batch_values[i, :episode_length]
- episode_rewards = batch_rewards[i]
- emp_val, gen_adv = rollout_lib.discounted_advantage_and_rewards(
- episode_rewards, episode_values, gamma=1.0, lambda_=1.0)
- batch_returns[i] = emp_val
- batch_targets[i] = gen_adv
- else:
- # Compute return for each timestep. See section 3 of
- # https://arxiv.org/pdf/1602.01783.pdf
- assert baselines is not None
- empirical_returns = rollout_lib.discount(batch_rewards[i], gamma=1.0)
- targets = [None] * episode_length
- for j in xrange(episode_length):
- targets[j] = empirical_returns[j] - baselines[j]
- batch_returns[i] = empirical_returns
- batch_targets[i] = targets
- batch_returns = utils.stack_pad(batch_returns, 0)
- if num_programs:
- batch_targets = utils.stack_pad(batch_targets, 0)
- else:
- batch_targets = np.array([], dtype=np.float32)
-
- return (batch_targets, batch_returns)
diff --git a/spaces/NSect/multitrack-midi-music-generator/Dockerfile b/spaces/NSect/multitrack-midi-music-generator/Dockerfile
deleted file mode 100644
index 3b72aae1806d72a1fbbeeeb2b78683b344ab3a1c..0000000000000000000000000000000000000000
--- a/spaces/NSect/multitrack-midi-music-generator/Dockerfile
+++ /dev/null
@@ -1,50 +0,0 @@
-FROM ubuntu:20.04
-
-WORKDIR /code
-
-ENV SYSTEM=spaces
-ENV SPACE_ID=juancopi81/multitrack-midi-music-generator
-
-COPY ./requirements.txt /code/requirements.txt
-
-# Preconfigure tzdata
-RUN DEBIAN_FRONTEND="noninteractive" apt-get -qq update && \
- DEBIAN_FRONTEND="noninteractive" apt-get install -y tzdata
-
-RUN apt-get update -qq && \
- apt-get install -qq python3-pip build-essential libasound2-dev libjack-dev wget cmake pkg-config libglib2.0-dev ffmpeg
-
-# Download, build, and install FluidSynth from source
-RUN wget https://github.com/FluidSynth/fluidsynth/archive/refs/tags/v2.3.3.tar.gz && \
- tar xzf v2.3.3.tar.gz && \
- cd fluidsynth-2.3.3 && \
- mkdir build && \
- cd build && \
- cmake .. && \
- make && \
- make install && \
- cd ../../ && \
- rm -rf fluidsynth-2.3.3 v2.3.3.tar.gz
-
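-# Make the FluidSynth library installed under /usr/local/lib visible to the
-# dynamic linker.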
-ENV LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
-RUN ldconfig
-
-RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-CMD ["python3", "main.py"]
diff --git a/spaces/NbAiLab/maken-clip-sketch/app.py b/spaces/NbAiLab/maken-clip-sketch/app.py
deleted file mode 100644
index e9101b17e4838ce772ebab28c841034a23c3cf26..0000000000000000000000000000000000000000
--- a/spaces/NbAiLab/maken-clip-sketch/app.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import os
-
-from pathlib import Path
-import pandas as pd, numpy as np
-from transformers import CLIPProcessor, CLIPTextModel, CLIPModel
-import torch
-from torch import nn
-import gradio as gr
-import requests
-from PIL import Image, ImageFile
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-
-
-LABELS = Path('class_names.txt').read_text().splitlines()
-class_model = nn.Sequential(
- nn.Conv2d(1, 32, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Conv2d(32, 64, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Conv2d(64, 128, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Flatten(),
- nn.Linear(1152, 256),
- nn.ReLU(),
- nn.Linear(256, len(LABELS)),
-)
-state_dict = torch.load('pytorch_model.bin', map_location='cpu')
-class_model.load_state_dict(state_dict, strict=False)
-class_model.eval()
-
-
-model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
-processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
-df = pd.read_csv('clip.csv')
-embeddings_npy = np.load('clip.npy')
-embeddings = np.divide(embeddings_npy, np.sqrt(np.sum(embeddings_npy**2, axis=1, keepdims=True)))
-
-
-def compute_text_embeddings(list_of_strings):
- inputs = processor(text=list_of_strings, return_tensors="pt", padding=True)
- return model.get_text_features(**inputs)
-
-
-def compute_image_embeddings(list_of_images):
- inputs = processor(images=list_of_images, return_tensors="pt", padding=True)
- return model.get_image_features(**inputs)
-
-
-def load_image(image, same_height=False):
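-    """Convert an array to an RGB PIL image scaled so its height (or shorter side) is 224 px."""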
- # im = Image.open(path)
- im = Image.fromarray(np.uint8(image))
- if im.mode != 'RGB':
- im = im.convert('RGB')
- if same_height:
- ratio = 224/im.size[1]
- return im.resize((int(im.size[0]*ratio), int(im.size[1]*ratio)))
- else:
- ratio = 224/min(im.size)
- return im.resize((int(im.size[0]*ratio), int(im.size[1]*ratio)))
-
-
-def download_img(identifier, url):
- local_path = f"{identifier}.jpg"
- if not os.path.isfile(local_path):
- img_data = requests.get(url).content
- with open(local_path, 'wb') as handler:
- handler.write(img_data)
- return local_path
-
-
-def predict(image=None, text=None, sketch=None):
- if image is not None:
- input_embeddings = compute_image_embeddings([load_image(image)]).detach().numpy()
- topk = {"local": 100}
- else:
- if text:
- query = text
- topk = {text: 100}
- else:
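-            # Classify the sketch with the small CNN and use the top predicted
-            # label as the text query for CLIP.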
- x = torch.tensor(sketch, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.
- with torch.no_grad():
- out = class_model(x)
- probabilities = torch.nn.functional.softmax(out[0], dim=0)
- values, indices = torch.topk(probabilities, 5)
- query = LABELS[indices[0]]
- topk = {LABELS[i]: v.item() for i, v in zip(indices, values)}
- input_embeddings = compute_text_embeddings([query]).detach().numpy()
-
- n_results = 3
- results = np.argsort((embeddings @ input_embeddings.T)[:, 0])[-1:-n_results - 1:-1]
- outputs = [download_img(df.iloc[i]['id'], df.iloc[i]['thumbnail']) for i in results]
- outputs.insert(0, topk)
- print(outputs)
- return outputs
-
-
-def predict_sketch(sketch):
- return predict(None, None, sketch)
-
-
-title = "Draw to search in the Nasjonalbiblioteket"
-description = "Find images in the Nasjonalbiblioteket image collections based on what you draw"
-interface = gr.Interface(
- fn=predict_sketch,
- inputs=["sketchpad"],
- outputs=[gr.outputs.Label(num_top_classes=3), gr.outputs.Image(type="file"), gr.outputs.Image(type="file"), gr.outputs.Image(type="file")],
- title=title,
- description=description,
- live=True
-)
-interface.launch(debug=True)
diff --git a/spaces/Nee001/bing0/src/app/page.tsx b/spaces/Nee001/bing0/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
-      <DynamicComponentWithNoSSR />
-    </>
- )
-}
diff --git a/spaces/Nephele/bert-vits2-multi-voice/README.md b/spaces/Nephele/bert-vits2-multi-voice/README.md
deleted file mode 100644
index 4bc82c964ea7c936979f0931f515b896e2eb1732..0000000000000000000000000000000000000000
--- a/spaces/Nephele/bert-vits2-multi-voice/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 多角色语音TTS
-emoji: ✨
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/NeuralInternet/chattensor-prompt-generator-v12/app.py b/spaces/NeuralInternet/chattensor-prompt-generator-v12/app.py
deleted file mode 100644
index ed7f04ba397322381680dc00dc4b7251275404d5..0000000000000000000000000000000000000000
--- a/spaces/NeuralInternet/chattensor-prompt-generator-v12/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-import gradio as gr
-
-tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompt-generator-v12")
-model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompt-generator-v12", from_tf=True)
-
-def generate(prompt):
-
- batch = tokenizer(prompt, return_tensors="pt")
- generated_ids = model.generate(batch["input_ids"], max_new_tokens=150)
- output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
- return output[0]
-
-input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer")
-output_component = gr.Textbox(label = "Prompt")
-examples = [["photographer"], ["developer"]]
-description = "This app generates Chattensor prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻🚀🧑🏻🎨🧑🏻🔬🧑🏻💻🧑🏼🏫🧑🏽🌾"
-gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "Chaττensor Prompt Generator v12", description=description).launch()
diff --git a/spaces/NoCrypt/miku/app.py b/spaces/NoCrypt/miku/app.py
deleted file mode 100644
index 3874ddcb20a1ee2ad665b8620becc1ec559d8027..0000000000000000000000000000000000000000
--- a/spaces/NoCrypt/miku/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import time
-
-import gradio as gr
-from gradio.themes.utils.theme_dropdown import create_theme_dropdown
-
-dropdown, js = create_theme_dropdown()
-
-with gr.Blocks(theme='NoCrypt/miku') as demo:
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=10):
- gr.Markdown(
- """
- # Theme preview: `miku`
- To use this theme, set `theme='NoCrypt/miku'` in `gr.Blocks()` or `gr.Interface()`.
- You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version
- of this theme.
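-            For example: `theme='NoCrypt/miku@>=1.0.0,<2.0.0'`.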
- """
- )
- with gr.Column(scale=3):
- with gr.Box():
- dropdown.render()
- toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True)
-
- dropdown.change(None, dropdown, None, _js=js)
- toggle_dark.click(
- None,
- _js="""
- () => {
- document.body.classList.toggle('dark');
- }
- """,
- )
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False)
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://i.ibb.co/F4hKFrZ/dark-miku.webp",
- label="Image",
- ).style(height=320)
- with gr.Row():
- go_btn = gr.Button("Go", label="Primary Button", variant="primary")
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://i.ibb.co/0rfK9Wm/light-miku-faded.webp"
-
- go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go")
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1").style(size="sm")
- btn2 = gr.UploadButton().style(size="sm")
- stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style(
- size="sm"
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON"
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4")
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ]
- ).style(height="200px", grid=2)
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
-
-if __name__ == "__main__":
- demo.queue().launch()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/latent_depth/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/latent_depth/README.md
deleted file mode 100644
index 7774c333053b95d15b180fdfc3ee3cd817790520..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/latent_depth/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Deep Transformers with Latent Depth (Li et al., 2020)
-
-[https://arxiv.org/abs/2009.13102](https://arxiv.org/abs/2009.13102).
-
-## Introduction
-
-We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair.
-
-## Training a multilingual model with latent depth
-
-Below is an example of training with latent depth in the decoder for one-to-many (O2M) related languages. We use the same preprocessed (numberized and binarized) TED8 dataset as in [Balancing Training for Multilingual Neural Machine Translation (Wang et al., 2020)](https://github.com/cindyxinyiwang/multiDDS), which can be generated with [the script](https://github.com/cindyxinyiwang/multiDDS/blob/multiDDS/util_scripts/prepare_multilingual_data.sh) provided by the authors.
-```bash
-lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur"
-databin_dir=
-
-fairseq-train ${databin_dir} \
- --user-dir examples/latent_depth/latent_depth_src \
- --lang-pairs "${lang_pairs_str}" \
- --arch multilingual_transformer_iwslt_de_en \
- --task multilingual_translation_latent_depth \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --share-encoders \
- --share-decoders \
- --decoder-langtok \
- --share-decoder-input-output-embed \
- --dropout 0.3 --attention-dropout 0.3 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --stop-min-lr 1e-9 --warmup-init-lr 1e-7 --warmup-updates 8000 \
- --max-tokens 4096 --update-freq 1 \
- --lr 0.0015 \
- --clip-norm 1.0 \
- --seed 2 \
- --ddp-backend=legacy_ddp \
- --encoder-layers 12 \
- --decoder-layers 24 \
- --decoder-latent-layer \
- --sparsity-weight 0.1 \
- --anneal-updates 5000 \
- --soft-update 500 \
- --target-layers 12 \
- --share-weight 0.1
-```
-## Inference command
-
-```bash
-lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur"
-databin_dir=
-model_path=
-src_lang=
-tgt_lang=
-gen_data=
-
-fairseq-generate ${databin_dir} \
- --path ${model_path} \
- --task multilingual_translation_latent_depth \
- --decoder-latent-layer \
- --lang-pairs "${lang_pairs_str}" \
- -s ${src_lang} -t ${tgt_lang} \
- --gen-subset $gen_data \
- --scoring sacrebleu \
- --remove-bpe 'sentencepiece' \
- --lenpen 1.0 \
- --beam 5 \
- --decoder-langtok \
- --max-tokens 4096
-```
-
-
-## Citation
-```bibtex
-@article{li2020deep,
- title={Deep Transformers with Latent Depth},
- author={Li, Xian and Stickland, Asa Cooper and Tang, Yuqing and Kong, Xiang},
- journal={arXiv preprint arXiv:2009.13102},
- year={2020}
-}
-```
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/transformer/transformer_base.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/transformer/transformer_base.py
deleted file mode 100644
index b4d5604dbbae979b424650882d33b45ebab323e6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/transformer/transformer_base.py
+++ /dev/null
@@ -1,179 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List, Optional, Tuple
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.dataclass.utils import gen_parser_from_dataclass
-from fairseq.distributed import fsdp_wrap
-from fairseq.models import FairseqEncoderDecoderModel
-from fairseq.models.transformer import (
- TransformerEncoderBase,
- TransformerDecoderBase,
- TransformerConfig,
-)
-from torch import Tensor
-
-
-class TransformerModelBase(FairseqEncoderDecoderModel):
- """
- Transformer model from `"Attention Is All You Need" (Vaswani, et al, 2017)
-    <https://arxiv.org/abs/1706.03762>`_.
-
- Args:
- encoder (TransformerEncoder): the encoder
- decoder (TransformerDecoder): the decoder
-
- The Transformer model provides the following named architectures and
- command-line arguments:
-
- .. argparse::
- :ref: fairseq.models.transformer_parser
- :prog:
- """
-
- def __init__(self, cfg, encoder, decoder):
- super().__init__(encoder, decoder)
- self.cfg = cfg
- self.supports_align_args = True
-
- @classmethod
- def add_args(cls, parser):
- """Add model-specific arguments to the parser."""
- # we want to build the args recursively in this case.
- gen_parser_from_dataclass(
- parser, TransformerConfig(), delete_default=False, with_prefix=""
- )
-
- @classmethod
- def build_model(cls, cfg, task):
- """Build a new model instance."""
-
- # -- TODO T96535332
- # bug caused by interaction between OmegaConf II and argparsing
- cfg.decoder.input_dim = int(cfg.decoder.input_dim)
- cfg.decoder.output_dim = int(cfg.decoder.output_dim)
- # --
-
- if cfg.encoder.layers_to_keep:
- cfg.encoder.layers = len(cfg.encoder.layers_to_keep.split(","))
- if cfg.decoder.layers_to_keep:
- cfg.decoder.layers = len(cfg.decoder.layers_to_keep.split(","))
-
- src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
-
- if cfg.share_all_embeddings:
- if src_dict != tgt_dict:
- raise ValueError("--share-all-embeddings requires a joined dictionary")
- if cfg.encoder.embed_dim != cfg.decoder.embed_dim:
- raise ValueError(
- "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim"
- )
- if cfg.decoder.embed_path and (
- cfg.decoder.embed_path != cfg.encoder.embed_path
- ):
- raise ValueError(
- "--share-all-embeddings not compatible with --decoder-embed-path"
- )
- encoder_embed_tokens = cls.build_embedding(
- cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path
- )
- decoder_embed_tokens = encoder_embed_tokens
- cfg.share_decoder_input_output_embed = True
- else:
- encoder_embed_tokens = cls.build_embedding(
- cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path
- )
- decoder_embed_tokens = cls.build_embedding(
- cfg, tgt_dict, cfg.decoder.embed_dim, cfg.decoder.embed_path
- )
- if cfg.offload_activations:
- cfg.checkpoint_activations = True # offloading implies checkpointing
- encoder = cls.build_encoder(cfg, src_dict, encoder_embed_tokens)
- decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens)
- if not cfg.share_all_embeddings:
- # fsdp_wrap is a no-op when --ddp-backend != fully_sharded
- encoder = fsdp_wrap(encoder, min_num_params=cfg.min_params_to_wrap)
- decoder = fsdp_wrap(decoder, min_num_params=cfg.min_params_to_wrap)
- return cls(cfg, encoder, decoder)
-
- @classmethod
- def build_embedding(cls, cfg, dictionary, embed_dim, path=None):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
-
- emb = Embedding(num_embeddings, embed_dim, padding_idx)
- # if provided, load from preloaded dictionaries
- if path:
- embed_dict = utils.parse_embedding(path)
- utils.load_embedding(embed_dict, dictionary, emb)
- return emb
-
- @classmethod
- def build_encoder(cls, cfg, src_dict, embed_tokens):
- return TransformerEncoderBase(cfg, src_dict, embed_tokens)
-
- @classmethod
- def build_decoder(cls, cfg, tgt_dict, embed_tokens):
- return TransformerDecoderBase(
- cfg,
- tgt_dict,
- embed_tokens,
- no_encoder_attn=cfg.no_cross_attention,
- )
-
- # TorchScript doesn't support optional arguments with variable length (**kwargs).
- # Current workaround is to add union of all arguments in child classes.
- def forward(
- self,
- src_tokens,
- src_lengths,
- prev_output_tokens,
- return_all_hiddens: bool = True,
- features_only: bool = False,
- alignment_layer: Optional[int] = None,
- alignment_heads: Optional[int] = None,
- ):
- """
- Run the forward pass for an encoder-decoder model.
-
- Copied from the base class, but without ``**kwargs``,
- which are not supported by TorchScript.
- """
- encoder_out = self.encoder(
- src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens
- )
- decoder_out = self.decoder(
- prev_output_tokens,
- encoder_out=encoder_out,
- features_only=features_only,
- alignment_layer=alignment_layer,
- alignment_heads=alignment_heads,
- src_lengths=src_lengths,
- return_all_hiddens=return_all_hiddens,
- )
- return decoder_out
-
- # Since get_normalized_probs is in the Fairseq Model which is not scriptable,
- # I rewrite the get_normalized_probs from Base Class to call the
- # helper function in the Base Class.
- @torch.jit.export
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- """Get normalized probabilities (or log probs) from a net's output."""
- return self.get_normalized_probs_scriptable(net_output, log_probs, sample)
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_fp16_optimizer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_fp16_optimizer.py
deleted file mode 100644
index ce4f1c055ce68b8e3933636fae66cca73c5e9d18..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_fp16_optimizer.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import copy
-import logging
-import unittest
-
-import torch
-from fairseq.optim.fp16_optimizer import FP16Optimizer, MemoryEfficientFP16Optimizer
-from omegaconf import OmegaConf
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestGradientScaling(unittest.TestCase):
- def setUp(self):
- self.x = torch.tensor([2.0]).cuda().half()
- weight = 3.0
- bias = 5.0
- self.error = 1.0
- self.target = torch.tensor([self.x * weight + bias + self.error]).cuda().half()
- self.loss_fn = torch.nn.L1Loss()
-
- self.model = torch.nn.Linear(1, 1)
- self.model.weight.data = torch.tensor([[weight]])
- self.model.bias.data = torch.tensor([bias])
- self.model.cuda().half()
- self.params = list(self.model.parameters())
-
- self.cfg_dls = OmegaConf.create(
- {
- "optimization": {
- "lr": [0.1],
- },
- "optimizer": {
- "_name": "adam",
- "lr": [0.1],
- "adam_betas": "(0.9, 0.999)",
- "adam_eps": 1e-8,
- "weight_decay": 0.0,
- },
- "common": {
- "fp16_init_scale": 1,
- "fp16_scale_window": 1,
- "fp16_scale_tolerance": 1,
- "threshold_loss_scale": 1,
- "min_loss_scale": 1e-4,
- "tpu": False,
- },
- }
- )
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def run_iter(self, model, params, optimizer):
- optimizer.zero_grad()
- y = model(self.x)
- loss = self.loss_fn(y, self.target)
- optimizer.backward(loss)
- self.assertEqual(loss, torch.tensor(1.0, device="cuda:0", dtype=torch.float16))
-
- grad_norm = optimizer.clip_grad_norm(0)
- self.assertAlmostEqual(grad_norm.item(), 2.2361, 4)
-
- optimizer.step()
- self.assertEqual(
- model.weight,
- torch.tensor(
- [[3.0996]], device="cuda:0", dtype=torch.float16, requires_grad=True
- ),
- )
- self.assertEqual(
- model.bias,
- torch.tensor(
- [5.1016], device="cuda:0", dtype=torch.float16, requires_grad=True
- ),
- )
- self.assertEqual(optimizer.scaler.loss_scale, 2.0)
-
- def test_mixed_precision(self):
- model = copy.deepcopy(self.model)
- params = list(model.parameters())
- optimizer = FP16Optimizer.build_optimizer(self.cfg_dls, params)
-
- self.run_iter(model, params, optimizer)
- self.assertTrue(
- all(
- torch.all(
- fp32_params.eq(
- torch.tensor(
- [3.1000, 5.1000], device="cuda:0", requires_grad=True
- )
- )
- )
- for fp32_params in optimizer.fp32_params.values()
- )
- )
-
- def test_memory_efficient(self):
- model = copy.deepcopy(self.model)
- params = list(model.parameters())
- optimizer = MemoryEfficientFP16Optimizer.build_optimizer(self.cfg_dls, params)
-
- self.run_iter(model, params, optimizer)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py
deleted file mode 100644
index 56d63e3e1b5a036e0adf32480e2b66f371738013..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-
-import torch
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from omegaconf import II
-
-
-@dataclass
-class LabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass):
- label_smoothing: float = field(
- default=0.0,
- metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"},
- )
- report_accuracy: bool = field(
- default=False,
- metadata={"help": "report accuracy metric"},
- )
- ignore_prefix_size: int = field(
- default=0,
- metadata={"help": "Ignore first N tokens"},
- )
- sentence_avg: bool = II("optimization.sentence_avg")
-
-
-def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=None, reduce=True):
- if target.dim() == lprobs.dim() - 1:
- target = target.unsqueeze(-1)
- nll_loss = -lprobs.gather(dim=-1, index=target)
- smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
- if ignore_index is not None:
- pad_mask = target.eq(ignore_index)
- nll_loss.masked_fill_(pad_mask, 0.0)
- smooth_loss.masked_fill_(pad_mask, 0.0)
- else:
- nll_loss = nll_loss.squeeze(-1)
- smooth_loss = smooth_loss.squeeze(-1)
- if reduce:
- nll_loss = nll_loss.sum()
- smooth_loss = smooth_loss.sum()
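-    # Spread epsilon uniformly over the (vocab_size - 1) non-target classes.
-    # Since smooth_loss sums over all classes (including the target), the nll
-    # term below is weighted by (1 - epsilon - eps_i), which leaves an
-    # effective weight of (1 - epsilon) on the target class.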
- eps_i = epsilon / (lprobs.size(-1) - 1)
- loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss
- return loss, nll_loss
-
-
-@register_criterion(
- "label_smoothed_cross_entropy", dataclass=LabelSmoothedCrossEntropyCriterionConfig
-)
-class LabelSmoothedCrossEntropyCriterion(FairseqCriterion):
- def __init__(
- self,
- task,
- sentence_avg,
- label_smoothing,
- ignore_prefix_size=0,
- report_accuracy=False,
- ):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- self.eps = label_smoothing
- self.ignore_prefix_size = ignore_prefix_size
- self.report_accuracy = report_accuracy
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- net_output = model(**sample["net_input"])
- loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce)
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
- )
- logging_output = {
- "loss": loss.data,
- "nll_loss": nll_loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- }
- if self.report_accuracy:
- n_correct, total = self.compute_accuracy(model, net_output, sample)
- logging_output["n_correct"] = utils.item(n_correct.data)
- logging_output["total"] = utils.item(total.data)
- return loss, sample_size, logging_output
-
- def get_lprobs_and_target(self, model, net_output, sample):
- lprobs = model.get_normalized_probs(net_output, log_probs=True)
- target = model.get_targets(sample, net_output)
- if self.ignore_prefix_size > 0:
- if getattr(lprobs, "batch_first", False):
- lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous()
- target = target[:, self.ignore_prefix_size :].contiguous()
- else:
- lprobs = lprobs[self.ignore_prefix_size :, :, :].contiguous()
- target = target[self.ignore_prefix_size :, :].contiguous()
- return lprobs.view(-1, lprobs.size(-1)), target.view(-1)
-
- def compute_loss(self, model, net_output, sample, reduce=True):
- lprobs, target = self.get_lprobs_and_target(model, net_output, sample)
- loss, nll_loss = label_smoothed_nll_loss(
- lprobs,
- target,
- self.eps,
- ignore_index=self.padding_idx,
- reduce=reduce,
- )
- return loss, nll_loss
-
- def compute_accuracy(self, model, net_output, sample):
- lprobs, target = self.get_lprobs_and_target(model, net_output, sample)
- mask = target.ne(self.padding_idx)
- n_correct = torch.sum(
- lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))
- )
- total = torch.sum(mask)
- return n_correct, total
-
- @classmethod
- def reduce_metrics(cls, logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_scalar(
- "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
-
- total = utils.item(sum(log.get("total", 0) for log in logging_outputs))
- if total > 0:
- metrics.log_scalar("total", total)
- n_correct = utils.item(
- sum(log.get("n_correct", 0) for log in logging_outputs)
- )
- metrics.log_scalar("n_correct", n_correct)
- metrics.log_derived(
- "accuracy",
- lambda meters: round(
- meters["n_correct"].sum * 100.0 / meters["total"].sum, 3
- )
- if meters["total"].sum > 0
- else float("nan"),
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
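
For reference, the smoothing rule used by `label_smoothed_nll_loss` above is easy to reproduce on a toy tensor. The sketch below uses plain PyTorch only (no fairseq imports); the shapes and epsilon value are illustrative, not taken from any fairseq config.

```python
import torch
import torch.nn.functional as F

# Toy batch: 2 predictions over a 4-class vocabulary.
logits = torch.randn(2, 4)
lprobs = F.log_softmax(logits, dim=-1)                 # (2, 4) log-probabilities
target = torch.tensor([1, 3])                          # (2,) gold class indices
epsilon = 0.1                                          # label-smoothing mass

nll_loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
smooth_loss = -lprobs.sum(dim=-1)                      # uniform term over all classes

# Same mixing rule as the helper above: keep (1 - eps - eps_i) on the gold
# label and spread eps_i over the remaining classes.
eps_i = epsilon / (lprobs.size(-1) - 1)
loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss
print(loss.sum().item(), nll_loss.sum().item())
```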
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_io.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_io.py
deleted file mode 100644
index dba663d4aafeb925ddffa50f5055933d6531a069..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_io.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import shutil
-from typing import List, Optional
-
-
-logger = logging.getLogger(__file__)
-
-
-try:
- from iopath.common.file_io import g_pathmgr as IOPathManager
-
- try:
- # [FB only - for now] AWS PathHandler for PathManager
- from .fb_pathhandlers import S3PathHandler
-
- IOPathManager.register_handler(S3PathHandler())
- except KeyError:
- logging.warning("S3PathHandler already registered.")
- except ImportError:
- logging.debug(
- "S3PathHandler couldn't be imported. Either missing fb-only files, or boto3 module."
- )
-
-except ImportError:
- IOPathManager = None
-
-
-class PathManager:
- """
- Wrapper for insulating OSS I/O (using Python builtin operations) from
- iopath's PathManager abstraction (for transparently handling various
- internal backends).
- """
-
- @staticmethod
- def open(
- path: str,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ):
- if IOPathManager:
- return IOPathManager.open(
- path=path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
- return open(
- path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
-
- @staticmethod
- def copy(src_path: str, dst_path: str, overwrite: bool = False) -> bool:
- if IOPathManager:
- return IOPathManager.copy(
- src_path=src_path, dst_path=dst_path, overwrite=overwrite
- )
- return shutil.copyfile(src_path, dst_path)
-
- @staticmethod
- def get_local_path(path: str, **kwargs) -> str:
- if IOPathManager:
- return IOPathManager.get_local_path(path, **kwargs)
- return path
-
- @staticmethod
- def exists(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.exists(path)
- return os.path.exists(path)
-
- @staticmethod
- def isfile(path: str) -> bool:
- if IOPathManager:
- return IOPathManager.isfile(path)
- return os.path.isfile(path)
-
- @staticmethod
- def ls(path: str) -> List[str]:
- if IOPathManager:
- return IOPathManager.ls(path)
- return os.listdir(path)
-
- @staticmethod
- def mkdirs(path: str) -> None:
- if IOPathManager:
- return IOPathManager.mkdirs(path)
- os.makedirs(path, exist_ok=True)
-
- @staticmethod
- def rm(path: str) -> None:
- if IOPathManager:
- return IOPathManager.rm(path)
- os.remove(path)
-
- @staticmethod
- def chmod(path: str, mode: int) -> None:
- if not PathManager.path_requires_pathmanager(path):
- os.chmod(path, mode)
-
- @staticmethod
- def register_handler(handler) -> None:
- if IOPathManager:
- return IOPathManager.register_handler(handler=handler)
-
- @staticmethod
- def copy_from_local(
- local_path: str, dst_path: str, overwrite: bool = False, **kwargs
- ) -> None:
- if IOPathManager:
- return IOPathManager.copy_from_local(
- local_path=local_path, dst_path=dst_path, overwrite=overwrite, **kwargs
- )
- return shutil.copyfile(local_path, dst_path)
-
- @staticmethod
- def path_requires_pathmanager(path: str) -> bool:
- """Do we require PathManager to access given path?"""
- if IOPathManager:
- for p in IOPathManager._path_handlers.keys():
- if path.startswith(p):
- return True
- return False
-
- @staticmethod
- def supports_rename(path: str) -> bool:
- # PathManager doesn't yet support renames
- return not PathManager.path_requires_pathmanager(path)
-
- @staticmethod
- def rename(src: str, dst: str):
- os.rename(src, dst)
-
- """
- ioPath async PathManager methods:
- """
- @staticmethod
- def opena(
- path: str,
- mode: str = "r",
- buffering: int = -1,
- encoding: Optional[str] = None,
- errors: Optional[str] = None,
- newline: Optional[str] = None,
- ):
- """
- Return file descriptor with asynchronous write operations.
- """
- global IOPathManager
- if not IOPathManager:
- logging.info("ioPath is initializing PathManager.")
- try:
- from iopath.common.file_io import PathManager
- IOPathManager = PathManager()
- except Exception:
- logging.exception("Failed to initialize ioPath PathManager object.")
- return IOPathManager.opena(
- path=path,
- mode=mode,
- buffering=buffering,
- encoding=encoding,
- errors=errors,
- newline=newline,
- )
-
- @staticmethod
- def async_close() -> bool:
- """
- Wait for files to be written and clean up asynchronous PathManager.
- NOTE: `PathManager.async_close()` must be called at the end of any
- script that uses `PathManager.opena(...)`.
- """
- global IOPathManager
- if IOPathManager:
- return IOPathManager.async_close()
- return False
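
For reference, every wrapper method above degrades gracefully to the standard library when iopath is absent, so callers can use `PathManager` unconditionally. A minimal usage sketch, assuming the bundled fairseq package is importable; the file path is made up.

```python
import os
import tempfile

from fairseq.file_io import PathManager   # falls back to builtins if iopath is missing

path = os.path.join(tempfile.gettempdir(), "pathmanager_demo.txt")  # hypothetical path

with PathManager.open(path, "w") as f:    # plain open() on the fallback path
    f.write("hello\n")

assert PathManager.exists(path) and PathManager.isfile(path)
print(len(PathManager.ls(tempfile.gettempdir())))    # os.listdir() fallback
PathManager.rm(path)                                 # os.remove() fallback
```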
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_utils.py
deleted file mode 100644
index d1d5ea65746682881264e4a9c462854dcfb3413f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/file_utils.py
+++ /dev/null
@@ -1,369 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utilities for working with the local dataset cache.
-This file is adapted from the AllenNLP
-and huggingface implementations.
-"""
-
-import fnmatch
-import json
-import logging
-import os
-import shutil
-import tarfile
-import tempfile
-from functools import partial, wraps
-from hashlib import sha256
-from io import open
-
-
-try:
- from torch.hub import _get_torch_home
-
- torch_cache_home = _get_torch_home()
-except ImportError:
- torch_cache_home = os.path.expanduser(
- os.getenv(
- "TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "torch")
- )
- )
-default_cache_path = os.path.join(torch_cache_home, "pytorch_fairseq")
-
-try:
- from urllib.parse import urlparse
-except ImportError:
- from urlparse import urlparse
-
-try:
- from pathlib import Path
-
- PYTORCH_FAIRSEQ_CACHE = Path(os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path))
-except (AttributeError, ImportError):
- PYTORCH_FAIRSEQ_CACHE = os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path)
-
-CONFIG_NAME = "config.json"
-WEIGHTS_NAME = "pytorch_model.bin"
-
-logger = logging.getLogger(__name__) # pylint: disable=invalid-name
-
-
-def load_archive_file(archive_file):
- # redirect to the cache, if necessary
- try:
- resolved_archive_file = cached_path(archive_file, cache_dir=None)
- except EnvironmentError:
- logger.info(
- "Archive name '{}' was not found in archive name list. "
- "We assumed '{}' was a path or URL but couldn't find any file "
- "associated to this path or URL.".format(
- archive_file,
- archive_file,
- )
- )
- return None
-
- if resolved_archive_file == archive_file:
- logger.info("loading archive file {}".format(archive_file))
- else:
- logger.info(
- "loading archive file {} from cache at {}".format(
- archive_file, resolved_archive_file
- )
- )
-
- # Extract archive to temp dir and replace .tar.bz2 if necessary
- tempdir = None
- if not os.path.isdir(resolved_archive_file):
- tempdir = tempfile.mkdtemp()
- logger.info(
- "extracting archive file {} to temp dir {}".format(
- resolved_archive_file, tempdir
- )
- )
- ext = os.path.splitext(archive_file)[1][1:]
- with tarfile.open(resolved_archive_file, "r:" + ext) as archive:
- top_dir = os.path.commonprefix(archive.getnames())
- archive.extractall(tempdir)
- os.remove(resolved_archive_file)
- shutil.move(os.path.join(tempdir, top_dir), resolved_archive_file)
- shutil.rmtree(tempdir)
-
- return resolved_archive_file
-
-
-def url_to_filename(url, etag=None):
- """
- Convert `url` into a hashed filename in a repeatable way.
- If `etag` is specified, append its hash to the URL's, delimited
- by a period.
- """
- url_bytes = url.encode("utf-8")
- url_hash = sha256(url_bytes)
- filename = url_hash.hexdigest()
-
- if etag:
- etag_bytes = etag.encode("utf-8")
- etag_hash = sha256(etag_bytes)
- filename += "." + etag_hash.hexdigest()
-
- return filename
-
-
-def filename_to_url(filename, cache_dir=None):
- """
- Return the url and etag (which may be ``None``) stored for `filename`.
- Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist.
- """
- if cache_dir is None:
- cache_dir = PYTORCH_FAIRSEQ_CACHE
- if isinstance(cache_dir, Path):
- cache_dir = str(cache_dir)
-
- cache_path = os.path.join(cache_dir, filename)
- if not os.path.exists(cache_path):
- raise EnvironmentError("file {} not found".format(cache_path))
-
- meta_path = cache_path + ".json"
- if not os.path.exists(meta_path):
- raise EnvironmentError("file {} not found".format(meta_path))
-
- with open(meta_path, encoding="utf-8") as meta_file:
- metadata = json.load(meta_file)
- url = metadata["url"]
- etag = metadata["etag"]
-
- return url, etag
-
-
-def cached_path_from_pm(url_or_filename):
- """
- Try to cache the specified URL using the PathManager class.
- Return the cached path on success, otherwise None.
- """
- try:
- from fairseq.file_io import PathManager
- local_path = PathManager.get_local_path(url_or_filename)
- return local_path
- except Exception:
- return None
-
-
-def cached_path(url_or_filename, cache_dir=None):
- """
- Given something that might be a URL (or might be a local path),
- determine which. If it's a URL, download the file and cache it, and
- return the path to the cached file. If it's already a local path,
- make sure the file exists and then return the path.
- """
- if cache_dir is None:
- cache_dir = PYTORCH_FAIRSEQ_CACHE
- if isinstance(url_or_filename, Path):
- url_or_filename = str(url_or_filename)
- if isinstance(cache_dir, Path):
- cache_dir = str(cache_dir)
-
- parsed = urlparse(url_or_filename)
-
- if parsed.scheme in ("http", "https", "s3"):
- # URL, so get it from the cache (downloading if necessary)
- return get_from_cache(url_or_filename, cache_dir)
- elif os.path.exists(url_or_filename):
- # File, and it exists.
- return url_or_filename
- elif parsed.scheme == "":
- # File, but it doesn't exist.
- raise EnvironmentError("file {} not found".format(url_or_filename))
- else:
- cached_path = cached_path_from_pm(url_or_filename)
- if cached_path:
- return cached_path
- # Something unknown
- raise ValueError(
- "unable to parse {} as a URL or as a local path".format(url_or_filename)
- )
-
-
-def split_s3_path(url):
- """Split a full s3 path into the bucket name and path."""
- parsed = urlparse(url)
- if not parsed.netloc or not parsed.path:
- raise ValueError("bad s3 path {}".format(url))
- bucket_name = parsed.netloc
- s3_path = parsed.path
- # Remove '/' at beginning of path.
- if s3_path.startswith("/"):
- s3_path = s3_path[1:]
- return bucket_name, s3_path
-
-
-def s3_request(func):
- """
- Wrapper function for s3 requests in order to create more helpful error
- messages.
- """
-
- @wraps(func)
- def wrapper(url, *args, **kwargs):
- from botocore.exceptions import ClientError
-
- try:
- return func(url, *args, **kwargs)
- except ClientError as exc:
- if int(exc.response["Error"]["Code"]) == 404:
- raise EnvironmentError("file {} not found".format(url))
- else:
- raise
-
- return wrapper
-
-
-@s3_request
-def s3_etag(url):
- """Check ETag on S3 object."""
- import boto3
-
- s3_resource = boto3.resource("s3")
- bucket_name, s3_path = split_s3_path(url)
- s3_object = s3_resource.Object(bucket_name, s3_path)
- return s3_object.e_tag
-
-
-@s3_request
-def s3_get(url, temp_file):
- """Pull a file directly from S3."""
- import boto3
-
- s3_resource = boto3.resource("s3")
- bucket_name, s3_path = split_s3_path(url)
- s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file)
-
-
-def request_wrap_timeout(func, url):
- import requests
-
- for attempt, timeout in enumerate([10, 20, 40, 60, 60]):
- try:
- return func(timeout=timeout)
- except requests.exceptions.Timeout as e:
- logger.warning(
- "Request for %s timed-out (attempt %d). Retrying with a timeout of %d secs",
- url,
- attempt,
- timeout,
- exc_info=e,
- )
- continue
- raise RuntimeError(f"Unable to fetch file {url}")
-
-
-def http_get(url, temp_file):
- import requests
- from tqdm import tqdm
-
- req = request_wrap_timeout(partial(requests.get, url, stream=True), url)
- content_length = req.headers.get("Content-Length")
- total = int(content_length) if content_length is not None else None
- progress = tqdm(unit="B", total=total)
- for chunk in req.iter_content(chunk_size=1024):
- if chunk: # filter out keep-alive new chunks
- progress.update(len(chunk))
- temp_file.write(chunk)
- progress.close()
-
-
-def get_from_cache(url, cache_dir=None):
- """
- Given a URL, look for the corresponding dataset in the local cache.
- If it's not there, download it. Then return the path to the cached file.
- """
- if cache_dir is None:
- cache_dir = PYTORCH_FAIRSEQ_CACHE
- if isinstance(cache_dir, Path):
- cache_dir = str(cache_dir)
-
- if not os.path.exists(cache_dir):
- os.makedirs(cache_dir)
-
- # Get eTag to add to filename, if it exists.
- if url.startswith("s3://"):
- etag = s3_etag(url)
- else:
- try:
- import requests
-
- response = request_wrap_timeout(
- partial(requests.head, url, allow_redirects=True), url
- )
- if response.status_code != 200:
- etag = None
- else:
- etag = response.headers.get("ETag")
- except RuntimeError:
- etag = None
-
- filename = url_to_filename(url, etag)
-
- # get cache path to put the file
- cache_path = os.path.join(cache_dir, filename)
-
- # If we don't have a connection (etag is None) and can't identify the file
- # try to get the last downloaded one
- if not os.path.exists(cache_path) and etag is None:
- matching_files = fnmatch.filter(os.listdir(cache_dir), filename + ".*")
- matching_files = list(filter(lambda s: not s.endswith(".json"), matching_files))
- if matching_files:
- cache_path = os.path.join(cache_dir, matching_files[-1])
-
- if not os.path.exists(cache_path):
- # Download to temporary file, then copy to cache dir once finished.
- # Otherwise you get corrupt cache entries if the download gets interrupted.
- with tempfile.NamedTemporaryFile() as temp_file:
- logger.info("%s not found in cache, downloading to %s", url, temp_file.name)
-
- # GET file object
- if url.startswith("s3://"):
- s3_get(url, temp_file)
- else:
- http_get(url, temp_file)
-
- # we are copying the file before closing it, so flush to avoid truncation
- temp_file.flush()
- # shutil.copyfileobj() starts at the current position, so go to the start
- temp_file.seek(0)
-
- logger.info("copying %s to cache at %s", temp_file.name, cache_path)
- with open(cache_path, "wb") as cache_file:
- shutil.copyfileobj(temp_file, cache_file)
-
- logger.info("creating metadata file for %s", cache_path)
- meta = {"url": url, "etag": etag}
- meta_path = cache_path + ".json"
- with open(meta_path, "w") as meta_file:
- output_string = json.dumps(meta)
- meta_file.write(output_string)
-
- logger.info("removing temp file %s", temp_file.name)
-
- return cache_path
-
-
-def read_set_from_file(filename):
- """
- Extract a de-duped collection (set) of text from a file.
- Expected file format is one item per line.
- """
- collection = set()
- with open(filename, "r", encoding="utf-8") as file_:
- for line in file_:
- collection.add(line.rstrip())
- return collection
-
-
-def get_file_extension(path, dot=True, lower=True):
- ext = os.path.splitext(path)[1]
- ext = ext if dot else ext[1:]
- return ext.lower() if lower else ext
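
For reference, a short sketch of how the helpers above fit together: `cached_path` passes an existing local path through unchanged, while URLs are hashed by `url_to_filename` and downloaded into the cache on first use. It assumes the bundled fairseq package is importable; the URL is a placeholder and the download line is left commented out.

```python
import tempfile

from fairseq.file_utils import cached_path, url_to_filename

# An existing local path is returned as-is.
with tempfile.NamedTemporaryFile(suffix=".txt") as tmp:
    assert cached_path(tmp.name) == tmp.name

# A URL maps to a deterministic cache file name: sha256(url) [+ "." + sha256(etag)].
url = "https://example.com/checkpoint.pt"   # placeholder URL
print(url_to_filename(url))
# cached_path(url)   # would download into PYTORCH_FAIRSEQ_CACHE (needs network + requests)
```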
diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/damo/damo_text2_video.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/damo/damo_text2_video.py
deleted file mode 100644
index 9da07b424fd5124f2ce58a3bf0798bc9931cf4c5..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/damo/damo_text2_video.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import gradio as gr
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from diffusers.utils import export_to_video
-
-from video_diffusion.utils.scheduler_list import diff_scheduler_list, get_scheduler_list
-
-stable_model_list = ["damo-vilab/text-to-video-ms-1.7b", "cerspense/zeroscope_v2_576w"]
-
-class DamoText2VideoGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, stable_model, scheduler):
- if self.pipe is None:
- self.pipe = DiffusionPipeline.from_pretrained(
- stable_model, torch_dtype=torch.float16, variant="fp16"
- )
- self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler)
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
- return self.pipe
-
- def generate_video(
- self,
- prompt: str,
- negative_prompt: str,
- stable_model: str,
- num_frames: int,
- num_inference_steps: int,
- guidance_scale: int,
- height: int,
- width: int,
- scheduler: str,
- ):
- pipe = self.load_model(stable_model=stable_model, scheduler=scheduler)
- video = pipe(
- prompt,
- negative_prompt=negative_prompt,
- num_frames=int(num_frames),
- height=height,
- width=width,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- ).frames
-
- video_path = export_to_video(video)
- return video_path
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- dano_text2video_prompt = gr.Textbox(lines=1, placeholder="Prompt", show_label=False)
- dano_text2video_negative_prompt = gr.Textbox(
- lines=1, placeholder="Negative Prompt", show_label=False
- )
- with gr.Row():
- with gr.Column():
- dano_text2video_model_list = gr.Dropdown(
- choices=stable_model_list,
- label="Model List",
- value=stable_model_list[0],
- )
-
- dano_text2video_num_inference_steps = gr.Slider(
- minimum=1,
- maximum=100,
- value=50,
- step=1,
- label="Inference Steps",
- )
- dano_text2video_guidance_scale = gr.Slider(
- minimum=1,
- maximum=15,
- value=7,
- step=1,
- label="Guidance Scale",
- )
- dano_text2video_num_frames = gr.Slider(
- minimum=1,
- maximum=50,
- value=16,
- step=1,
- label="Number of Frames",
- )
- with gr.Row():
- with gr.Column():
- dano_text2video_height = gr.Slider(
- minimum=128,
- maximum=1280,
- value=512,
- step=32,
- label="Height",
- )
- dano_text2video_width = gr.Slider(
- minimum=128,
- maximum=1280,
- value=512,
- step=32,
- label="Width",
- )
- damo_text2video_scheduler = gr.Dropdown(
- choices=diff_scheduler_list,
- label="Scheduler",
- value=diff_scheduler_list[6],
- )
- dano_text2video_generate = gr.Button(value="Generator")
- with gr.Column():
- dano_output = gr.Video(label="Output")
-
- dano_text2video_generate.click(
- fn=DamoText2VideoGenerator().generate_video,
- inputs=[
- dano_text2video_prompt,
- dano_text2video_negative_prompt,
- dano_text2video_model_list,
- dano_text2video_num_frames,
- dano_text2video_num_inference_steps,
- dano_text2video_guidance_scale,
- dano_text2video_height,
- dano_text2video_width,
- damo_text2video_scheduler,
- ],
- outputs=dano_output,
- )
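
For reference, a hedged sketch of driving the generator above outside the Gradio UI. It assumes a CUDA GPU plus the diffusers/xformers dependencies, and that the chosen scheduler name appears in `diff_scheduler_list`; the prompt and settings are arbitrary examples.

```python
from video_diffusion.damo.damo_text2_video import DamoText2VideoGenerator

generator = DamoText2VideoGenerator()
video_path = generator.generate_video(
    prompt="a corgi surfing a wave, cinematic lighting",  # arbitrary example prompt
    negative_prompt="low quality, blurry",
    stable_model="damo-vilab/text-to-video-ms-1.7b",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7,
    height=256,
    width=256,
    scheduler="DDIM",   # assumed to be one of the names in diff_scheduler_list
)
print(video_path)       # path to the exported video file
```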
diff --git a/spaces/Open-Orca/Mistral-7B-OpenOrca/README.md b/spaces/Open-Orca/Mistral-7B-OpenOrca/README.md
deleted file mode 100644
index 7ffb454821facb43247f8aa3cfed3a79cd36b941..0000000000000000000000000000000000000000
--- a/spaces/Open-Orca/Mistral-7B-OpenOrca/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mistral-7B-OpenOrca
-emoji: 🌊
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py
deleted file mode 100644
index 93258242a90695cc94a7c6bd41562d6a75988771..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/lraspp_m-v3-d8.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- backbone=dict(
- type='MobileNetV3',
- arch='large',
- out_indices=(1, 3, 16),
- norm_cfg=norm_cfg),
- decode_head=dict(
- type='LRASPPHead',
- in_channels=(16, 24, 960),
- in_index=(0, 1, 2),
- channels=128,
- input_transform='multiple_select',
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
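
For reference, files under `configs/_base_/models/` are inherited rather than executed directly: a downstream config pulls this one in via `_base_` and overrides only the fields that differ. A hedged sketch of such a derived config (the file names and overridden values are illustrative):

```python
# Hypothetical derived config, e.g. lraspp_m-v3-d8_512x1024_80k_cityscapes.py
_base_ = [
    '../_base_/models/lraspp_m-v3-d8.py',   # the model settings shown above
    # dataset / schedule / runtime base configs would normally be listed here too
]

# Override only what differs from the base; mmcv merges the dicts recursively.
model = dict(
    decode_head=dict(num_classes=19),        # e.g. the 19 Cityscapes classes
    test_cfg=dict(mode='whole'),
)
```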
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/dice_loss.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/dice_loss.py
deleted file mode 100644
index 27a77b962d7d8b3079c7d6cd9db52280c6fb4970..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/losses/dice_loss.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/
-segmentron/solver/loss.py (Apache-2.0 License)"""
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import get_class_weight, weighted_loss
-
-
-@weighted_loss
-def dice_loss(pred,
- target,
- valid_mask,
- smooth=1,
- exponent=2,
- class_weight=None,
- ignore_index=255):
- assert pred.shape[0] == target.shape[0]
- total_loss = 0
- num_classes = pred.shape[1]
- for i in range(num_classes):
- if i != ignore_index:
- dice_loss = binary_dice_loss(
- pred[:, i],
- target[..., i],
- valid_mask=valid_mask,
- smooth=smooth,
- exponent=exponent)
- if class_weight is not None:
- dice_loss *= class_weight[i]
- total_loss += dice_loss
- return total_loss / num_classes
-
-
-@weighted_loss
-def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards):
- assert pred.shape[0] == target.shape[0]
- pred = pred.reshape(pred.shape[0], -1)
- target = target.reshape(target.shape[0], -1)
- valid_mask = valid_mask.reshape(valid_mask.shape[0], -1)
-
- num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth
- den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth
-
- return 1 - num / den
-
-
-@LOSSES.register_module()
-class DiceLoss(nn.Module):
- """DiceLoss.
-
- This loss is proposed in `V-Net: Fully Convolutional Neural Networks for
- Volumetric Medical Image Segmentation <https://arxiv.org/abs/1606.04797>`_.
-
- Args:
- loss_type (str, optional): Binary or multi-class loss.
- Default: 'multi_class'. Options are "binary" and "multi_class".
- smooth (float): A float number to smooth loss, and avoid NaN error.
- Default: 1
- exponent (float): A float number used to calculate the denominator
- value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2.
- reduction (str, optional): The method used to reduce the loss. Options
- are "none", "mean" and "sum". This parameter only works when
- per_image is True. Default: 'mean'.
- class_weight (list[float] | str, optional): Weight of each class. If in
- str format, read them from a file. Defaults to None.
- loss_weight (float, optional): Weight of the loss. Default to 1.0.
- ignore_index (int | None): The label index to be ignored. Default: 255.
- """
-
- def __init__(self,
- smooth=1,
- exponent=2,
- reduction='mean',
- class_weight=None,
- loss_weight=1.0,
- ignore_index=255,
- **kwards):
- super(DiceLoss, self).__init__()
- self.smooth = smooth
- self.exponent = exponent
- self.reduction = reduction
- self.class_weight = get_class_weight(class_weight)
- self.loss_weight = loss_weight
- self.ignore_index = ignore_index
-
- def forward(self,
- pred,
- target,
- avg_factor=None,
- reduction_override=None,
- **kwards):
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.class_weight is not None:
- class_weight = pred.new_tensor(self.class_weight)
- else:
- class_weight = None
-
- pred = F.softmax(pred, dim=1)
- num_classes = pred.shape[1]
- one_hot_target = F.one_hot(
- torch.clamp(target.long(), 0, num_classes - 1),
- num_classes=num_classes)
- valid_mask = (target != self.ignore_index).long()
-
- loss = self.loss_weight * dice_loss(
- pred,
- one_hot_target,
- valid_mask=valid_mask,
- reduction=reduction,
- avg_factor=avg_factor,
- smooth=self.smooth,
- exponent=self.exponent,
- class_weight=class_weight,
- ignore_index=self.ignore_index)
- return loss
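
For reference, the quantity computed by `binary_dice_loss` above can be reproduced on a toy prediction with plain PyTorch; this is a simplified re-derivation (default `smooth=1`, `exponent=2`), not an import of the mmseg module.

```python
import torch

pred = torch.tensor([[0.9, 0.1, 0.8, 0.2]])    # soft foreground probabilities
target = torch.tensor([[1.0, 0.0, 1.0, 0.0]])  # binary ground truth
valid_mask = torch.ones_like(target)           # all positions count
smooth, exponent = 1, 2

num = torch.sum(pred * target * valid_mask, dim=1) * 2 + smooth
den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth
dice_loss_value = 1 - num / den
print(dice_loss_value)   # small values mean good overlap between pred and target
```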
diff --git a/spaces/PY007/TinyLlama-Chat/share_btn.py b/spaces/PY007/TinyLlama-Chat/share_btn.py
deleted file mode 100644
index 8ff61abe298d71349f565b5d47228986b42d1f96..0000000000000000000000000000000000000000
--- a/spaces/PY007/TinyLlama-Chat/share_btn.py
+++ /dev/null
@@ -1,98 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `sd-perception-${{imgId}}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `sd-perception-${{imgId}}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- // const gradioEl = document.querySelector('body > gradio-app');
- const gradioEl = document.querySelector("gradio-app");
- const inputTxt = gradioEl.querySelector('#q-input textarea').value;
- const outputTxt = gradioEl.querySelector('#q-output').outerHTML;
- const titleLength = 150;
- let titleTxt = inputTxt;
- if(titleTxt.length > titleLength){
- titleTxt = titleTxt.slice(0, titleLength) + ' ...';
- }
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!inputTxt || !outputTxt){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const descriptionMd = `### Question:
-${inputTxt}
-### Answer:
-${outputTxt}`;
- const params = {
- title: titleTxt,
- description: descriptionMd,
- };
- const paramsStr = Object.entries(params)
- .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
- .join('&');
- window.open(`https://huggingface.co/spaces/HuggingFaceH4/star-chat-demo/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
-
-share_btn_css = """
-a {text-decoration-line: underline; font-weight: 600;}
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from { transform: rotate(0deg); }
- to { transform: rotate(360deg); }
-}
-#share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
-}
-#share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
-}
-#share-btn * {
- all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-#share-btn-container .wrap {
- display: none !important;
-}
-"""
\ No newline at end of file
diff --git a/spaces/PeepDaSlan9/AutoGPT/ui/api.py b/spaces/PeepDaSlan9/AutoGPT/ui/api.py
deleted file mode 100644
index 3b46ad32148b23f06c6eb64c88708fc2bf92e4dc..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/ui/api.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import os, sys
-import utils
-import uuid
-import json
-import subprocess, threading
-
-FILE_DIR = os.path.dirname(os.path.abspath(__file__))
-REPO_DIR = os.path.dirname(FILE_DIR)
-STATE_DIR = os.path.join(FILE_DIR, "state")
-sys.path.append(REPO_DIR)
-if not os.path.exists(STATE_DIR):
- os.mkdir(STATE_DIR)
-import time
-
-
-def get_openai_api_key():
- return os.getenv("OPENAI_API_KEY")
-
-
-running_apis = []
-
-
-def get_state(state_file):
- with open(state_file, "r") as f:
- state = json.load(f)
- return state
-
-
-def set_state(state_file, state):
- with open(state_file, "w") as f:
- json.dump(state, f)
-
-
-class AutoAPI:
- def __init__(self, openai_key, ai_name, ai_role, top_5_goals):
- self.openai_key = openai_key
- hex = uuid.uuid4().hex
- print(hex)
- self.state_file = os.path.join(STATE_DIR, f"state_{hex}.json")
- self.log_file = os.path.join(STATE_DIR, f"log_{hex}.json")
-
- newline = "\n"
- with open(os.path.join(REPO_DIR, "ai_settings.yaml"), "w") as f:
- f.write(
- f"""ai_goals:
-{newline.join([f'- {goal[0]}' for goal in top_5_goals if goal[0]])}
-ai_name: {ai_name}
-ai_role: {ai_role}
-"""
- )
- state = {
- "pending_input": None,
- "awaiting_input": False,
- "messages": [],
- "last_message_read_index": -1,
- }
- set_state(self.state_file, state)
-
- with open(self.log_file, "w") as f:
- subprocess.Popen(
- [
- "python",
- os.path.join(REPO_DIR, "ui", "api.py"),
- openai_key,
- self.state_file,
- ],
- cwd=REPO_DIR,
- stdout=f,
- stderr=f,
- )
-
- def send_message(self, message="Y"):
- state = get_state(self.state_file)
- state["pending_input"] = message
- state["awaiting_input"] = False
- set_state(self.state_file, state)
-
- def get_chatbot_response(self):
- while True:
- state = get_state(self.state_file)
- if (
- state["awaiting_input"]
- and state["last_message_read_index"] >= len(state["messages"]) - 1
- ):
- break
- if state["last_message_read_index"] >= len(state["messages"]) - 1:
- time.sleep(1)
- else:
- state["last_message_read_index"] += 1
- title, content = state["messages"][state["last_message_read_index"]]
- yield (f"**{title.strip()}** " if title else "") + utils.remove_color(
- content
- ).replace("\n", " ")
- set_state(self.state_file, state)
-
-
-if __name__ == "__main__":
- print(sys.argv)
- _, openai_key, state_file = sys.argv
- os.environ["OPENAI_API_KEY"] = openai_key
- import autogpt.config.config
- from autogpt.logs import logger
- from autogpt.cli import main
- import autogpt.utils
- from autogpt.spinner import Spinner
-
- def add_message(title, content):
- state = get_state(state_file)
- state["messages"].append((title, content))
- set_state(state_file, state)
-
- def typewriter_log(title="", title_color="", content="", *args, **kwargs):
- add_message(title, content)
-
- def warn(message, title="", *args, **kwargs):
- add_message(title, message)
-
- def error(title, message="", *args, **kwargs):
- add_message(title, message)
-
- def clean_input(prompt=""):
- add_message(None, prompt)
- state = get_state(state_file)
- state["awaiting_input"] = True
- set_state(state_file, state)
- while state["pending_input"] is None:
- state = get_state(state_file)
- print("Waiting for input...")
- time.sleep(1)
- print("Got input")
- pending_input = state["pending_input"]
- state["pending_input"] = None
- set_state(state_file, state)
- return pending_input
-
- def spinner_start():
- add_message(None, "Thinking...")
-
- logger.typewriter_log = typewriter_log
- logger.warn = warn
- logger.error = error
- autogpt.utils.clean_input = clean_input
- Spinner.spin = spinner_start
-
- sys.argv = sys.argv[:1]
- main()
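
For reference, the UI process and the AutoGPT subprocess above never talk directly: they exchange `(title, content)` messages and input flags through a shared JSON state file. The standard-library sketch below mimics that handshake in a single process; the file name and messages are made up.

```python
import json
import os
import tempfile

def get_state(path):
    with open(path) as f:
        return json.load(f)

def set_state(path, state):
    with open(path, "w") as f:
        json.dump(state, f)

state_file = os.path.join(tempfile.gettempdir(), "autogpt_state_demo.json")
set_state(state_file, {"messages": [], "pending_input": None, "awaiting_input": False})

# "Agent" side: publish a message and ask the UI for input.
state = get_state(state_file)
state["messages"].append(("SYSTEM", "Continue with the next step? (Y/N)"))
state["awaiting_input"] = True
set_state(state_file, state)

# "UI" side: read the newest message and answer it.
state = get_state(state_file)
print(state["messages"][-1])
state["pending_input"] = "Y"
state["awaiting_input"] = False
set_state(state_file, state)
```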
diff --git a/spaces/PeepDaSlan9/HuggingFaceH4-zephyr-7b-alpha/README.md b/spaces/PeepDaSlan9/HuggingFaceH4-zephyr-7b-alpha/README.md
deleted file mode 100644
index a33e4ea8d2e649608eeceb20a3cda28b8f0511c9..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/HuggingFaceH4-zephyr-7b-alpha/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: HuggingFaceH4 Zephyr 7b Alpha
-emoji: 🐨
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/QINGCHE/TSA/run.py b/spaces/QINGCHE/TSA/run.py
deleted file mode 100644
index b618dcc861711c8ad47b22fd167bb14464a8f2e5..0000000000000000000000000000000000000000
--- a/spaces/QINGCHE/TSA/run.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import util
-import abstract
-import classification
-import inference
-import outline
-from inference import BertClassificationModel
-# input:file/text,topic_num,max_length,output_choice
-# output:file/text/topic_sentence
-
-
-def texClear(article):
- sentencesCleared = [util.clean_text(sentence) for sentence in article]
- sentencesCleared = [string for string in sentencesCleared if string != '' ]
- # print(sentencesCleared)
- return sentencesCleared
-
-def textToAb(sentences, article, topic_num, max_length):
- central_sentences = abstract.abstruct_main(sentences, topic_num)
- groups = classification.classify_by_topic(article, central_sentences)
- groups = util.article_to_group(groups, central_sentences)
- title_dict,title = util.generation(groups, max_length)
- # ans:
- # {Ai_abstruct:(main_sentence,paragraph)}
- # print(title)
- matrix = inference.inference_matrix(title)
-
- outl,outline_list = outline.passage_outline(matrix,title)
-
- output = util.formate_text(title_dict,outline_list)
-
- return outl, output
\ No newline at end of file
diff --git a/spaces/RMXK/RVC_HFF/infer/lib/train/data_utils.py b/spaces/RMXK/RVC_HFF/infer/lib/train/data_utils.py
deleted file mode 100644
index 51a176cceba860acf79157ed0bad2b82c8e80406..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/lib/train/data_utils.py
+++ /dev/null
@@ -1,517 +0,0 @@
-import os
-import traceback
-import logging
-
-logger = logging.getLogger(__name__)
-
-import numpy as np
-import torch
-import torch.utils.data
-
-from infer.lib.train.mel_processing import spectrogram_torch
-from infer.lib.train.utils import load_filepaths_and_text, load_wav_to_torch
-
-
-class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- pitch = audiopath_and_text[2]
- pitchf = audiopath_and_text[3]
- dv = audiopath_and_text[4]
-
- phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- # print(123,phone.shape,pitch.shape,spec.shape)
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- # amor
- len_wav = len_min * self.hop_length
-
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
-
- phone = phone[:len_min, :]
- pitch = pitch[:len_min]
- pitchf = pitchf[:len_min]
-
- return (spec, wav, phone, pitch, pitchf, dv)
-
- def get_labels(self, phone, pitch, pitchf):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- pitch = np.load(pitch)
- pitchf = np.load(pitchf)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- # print(234,phone.shape,pitch.shape)
- phone = phone[:n_num, :]
- pitch = pitch[:n_num]
- pitchf = pitchf[:n_num]
- phone = torch.FloatTensor(phone)
- pitch = torch.LongTensor(pitch)
- pitchf = torch.FloatTensor(pitchf)
- return phone, pitch, pitchf
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except:
- logger.warn("%s %s", spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollateMultiNSFsid:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text and aduio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- ) # (spec, wav, phone, pitch)
- pitch_padded = torch.LongTensor(len(batch), max_phone_len)
- pitchf_padded = torch.FloatTensor(len(batch), max_phone_len)
- phone_padded.zero_()
- pitch_padded.zero_()
- pitchf_padded.zero_()
- # dv = torch.FloatTensor(len(batch), 256)#gin=256
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- pitch = row[3]
- pitch_padded[i, : pitch.size(0)] = pitch
- pitchf = row[4]
- pitchf_padded[i, : pitchf.size(0)] = pitchf
-
- # dv[i] = row[5]
- sid[i] = row[5]
-
- return (
- phone_padded,
- phone_lengths,
- pitch_padded,
- pitchf_padded,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- # dv
- sid,
- )
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- dv = audiopath_and_text[2]
-
- phone = self.get_labels(phone)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- len_wav = len_min * self.hop_length
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
- phone = phone[:len_min, :]
- return (spec, wav, phone, dv)
-
- def get_labels(self, phone):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- phone = phone[:n_num, :]
- phone = torch.FloatTensor(phone)
- return phone
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except:
- logger.warn("%s %s", spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text and aduio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- )
- phone_padded.zero_()
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- sid[i] = row[3]
-
- return (
- phone_padded,
- phone_lengths,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- sid,
- )
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> every batch is drawn from either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 is discarded.
- """
-
- def __init__(
- self,
- dataset,
- batch_size,
- boundaries,
- num_replicas=None,
- rank=None,
- shuffle=True,
- ):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, -1, -1): #
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (
- total_batch_size - (len_bucket % total_batch_size)
- ) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = (
- ids_bucket
- + ids_bucket * (rem // len_bucket)
- + ids_bucket[: (rem % len_bucket)]
- )
-
- # subsample
- ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [
- bucket[idx]
- for idx in ids_bucket[
- j * self.batch_size : (j + 1) * self.batch_size
- ]
- ]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
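
For reference, a minimal single-process sketch of driving `DistributedBucketSampler` above (so `num_replicas=1, rank=0` and no process group is needed). The dummy dataset only has to expose `lengths` and `__len__`; the import assumes the RVC repo is on `PYTHONPATH` so the module resolves.

```python
import torch

from infer.lib.train.data_utils import DistributedBucketSampler  # assumes repo on PYTHONPATH

class DummyDataset(torch.utils.data.Dataset):
    def __init__(self, lengths):
        self.lengths = lengths              # per-item spectrogram lengths used for bucketing
    def __len__(self):
        return len(self.lengths)
    def __getitem__(self, idx):
        return idx

dataset = DummyDataset(lengths=[120, 180, 250, 300, 420, 480, 650, 700])
sampler = DistributedBucketSampler(
    dataset,
    batch_size=2,
    boundaries=[100, 200, 400, 800],        # bucket edges on length
    num_replicas=1,
    rank=0,
    shuffle=True,
)
sampler.set_epoch(0)                        # seeds the deterministic per-epoch shuffle
for batch in sampler:
    print(batch)                            # indices whose lengths share a bucket
```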
diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/layers_33966KB.py
deleted file mode 100644
index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/layers_33966KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
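
For reference, a quick shape check of the building blocks above. It assumes the RVC repo (and the audio libraries pulled in by `spec_utils`) is importable; the tensor sizes are arbitrary.

```python
import torch

from lib.uvr5_pack.lib_v5.layers_33966KB import ASPPModule, Decoder, Encoder

x = torch.randn(1, 2, 64, 128)              # (batch, channels, freq bins, frames)

enc = Encoder(2, 16, ksize=3, stride=2, pad=1)
h, skip = enc(x)                            # h is downsampled 2x; skip keeps full resolution
print(h.shape, skip.shape)                  # (1, 16, 32, 64) and (1, 16, 64, 128)

aspp = ASPPModule(16, 32)
ctx = aspp(h)                               # multi-dilation context features
print(ctx.shape)                            # (1, 32, 32, 64)

dec = Decoder(32 + 16, 16)
out = dec(ctx, skip)                        # upsample 2x, crop/concat the skip, then conv
print(out.shape)                            # (1, 16, 64, 128)
```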
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/build.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/build.py
deleted file mode 100644
index b30909c8704a5954ef5250ef890ed4cb1d50cf07..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/build.py
+++ /dev/null
@@ -1,126 +0,0 @@
-"""Build a project using PEP 517 hooks.
-"""
-import argparse
-import logging
-import os
-import shutil
-import tempfile
-
-from ._compat import tomllib
-from .envbuild import BuildEnvironment
-from .wrappers import Pep517HookCaller
-
-log = logging.getLogger(__name__)
-
-
-def validate_system(system):
- """
- Ensure build system has the requisite fields.
- """
- required = {'requires', 'build-backend'}
- if not (required <= set(system)):
- message = "Missing required fields: {missing}".format(
- missing=required-set(system),
- )
- raise ValueError(message)
-
-
-def load_system(source_dir):
- """
- Load the build system from a source dir (pyproject.toml).
- """
- pyproject = os.path.join(source_dir, 'pyproject.toml')
- with open(pyproject, 'rb') as f:
- pyproject_data = tomllib.load(f)
- return pyproject_data['build-system']
-
-
-def compat_system(source_dir):
- """
- Given a source dir, attempt to get a build system backend
- and requirements from pyproject.toml. Fallback to
- setuptools but only if the file was not found or a build
- system was not indicated.
- """
- try:
- system = load_system(source_dir)
- except (FileNotFoundError, KeyError):
- system = {}
- system.setdefault(
- 'build-backend',
- 'setuptools.build_meta:__legacy__',
- )
- system.setdefault('requires', ['setuptools', 'wheel'])
- return system
-
-
-def _do_build(hooks, env, dist, dest):
- get_requires_name = 'get_requires_for_build_{dist}'.format(**locals())
- get_requires = getattr(hooks, get_requires_name)
- reqs = get_requires({})
- log.info('Got build requires: %s', reqs)
-
- env.pip_install(reqs)
- log.info('Installed dynamic build dependencies')
-
- with tempfile.TemporaryDirectory() as td:
- log.info('Trying to build %s in %s', dist, td)
- build_name = 'build_{dist}'.format(**locals())
- build = getattr(hooks, build_name)
- filename = build(td, {})
- source = os.path.join(td, filename)
- shutil.move(source, os.path.join(dest, os.path.basename(filename)))
-
-
-def build(source_dir, dist, dest=None, system=None):
- system = system or load_system(source_dir)
- dest = os.path.join(source_dir, dest or 'dist')
- os.makedirs(dest, exist_ok=True)
-
- validate_system(system)
- hooks = Pep517HookCaller(
- source_dir, system['build-backend'], system.get('backend-path')
- )
-
- with BuildEnvironment() as env:
- env.pip_install(system['requires'])
- _do_build(hooks, env, dist, dest)
-
-
-parser = argparse.ArgumentParser()
-parser.add_argument(
- 'source_dir',
- help="A directory containing pyproject.toml",
-)
-parser.add_argument(
- '--binary', '-b',
- action='store_true',
- default=False,
-)
-parser.add_argument(
- '--source', '-s',
- action='store_true',
- default=False,
-)
-parser.add_argument(
- '--out-dir', '-o',
- help="Destination in which to save the builds relative to source dir",
-)
-
-
-def main(args):
- log.warning('pep517.build is deprecated. '
- 'Consider switching to https://pypi.org/project/build/')
-
- # determine which dists to build
- dists = list(filter(None, (
- 'sdist' if args.source or not args.binary else None,
- 'wheel' if args.binary or not args.source else None,
- )))
-
- for dist in dists:
- build(args.source_dir, dist, args.out_dir)
-
-
-if __name__ == '__main__':
- main(parser.parse_args())
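
As context for the module above (which itself warns that it is deprecated in favour of the `build` project), a hedged sketch of driving it programmatically. It assumes the standalone pep517 distribution is installed and importable as `pep517.build`; pip's private vendored copy shown here is not meant to be imported directly.

from pep517.build import build, compat_system

src = "."                                  # directory containing pyproject.toml
system = compat_system(src)                # falls back to setuptools/wheel if no build-system is declared
build(src, dist="wheel", dest="dist", system=system)   # writes the wheel into ./dist
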
diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/CMU/pipeline.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/CMU/pipeline.py
deleted file mode 100644
index 788dc7b0aac14de81237684b653d970d1c7ec19e..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/pipelines/CMU/pipeline.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from pathlib import Path
-import argparse
-
-from ... import extract_features, match_features, triangulation, logger
-from ... import pairs_from_covisibility, pairs_from_retrieval, localize_sfm
-
-TEST_SLICES = [2, 3, 4, 5, 6, 13, 14, 15, 16, 17, 18, 19, 20, 21]
-
-
-def generate_query_list(dataset, path, slice_):
- cameras = {}
- with open(dataset / "intrinsics.txt", "r") as f:
- for line in f.readlines():
- if line[0] == "#" or line == "\n":
- continue
- data = line.split()
- cameras[data[0]] = data[1:]
- assert len(cameras) == 2
-
- queries = dataset / f"{slice_}/test-images-{slice_}.txt"
- with open(queries, "r") as f:
- queries = [q.rstrip("\n") for q in f.readlines()]
-
- out = [[q] + cameras[q.split("_")[2]] for q in queries]
- with open(path, "w") as f:
- f.write("\n".join(map(" ".join, out)))
-
-
-def run_slice(slice_, root, outputs, num_covis, num_loc):
- dataset = root / slice_
- ref_images = dataset / "database"
- query_images = dataset / "query"
- sift_sfm = dataset / "sparse"
-
- outputs = outputs / slice_
- outputs.mkdir(exist_ok=True, parents=True)
- query_list = dataset / "queries_with_intrinsics.txt"
- sfm_pairs = outputs / f"pairs-db-covis{num_covis}.txt"
- loc_pairs = outputs / f"pairs-query-netvlad{num_loc}.txt"
- ref_sfm = outputs / "sfm_superpoint+superglue"
- results = outputs / f"CMU_hloc_superpoint+superglue_netvlad{num_loc}.txt"
-
- # pick one of the configurations for extraction and matching
- retrieval_conf = extract_features.confs["netvlad"]
- feature_conf = extract_features.confs["superpoint_aachen"]
- matcher_conf = match_features.confs["superglue"]
-
- pairs_from_covisibility.main(sift_sfm, sfm_pairs, num_matched=num_covis)
- features = extract_features.main(
- feature_conf, ref_images, outputs, as_half=True
- )
- sfm_matches = match_features.main(
- matcher_conf, sfm_pairs, feature_conf["output"], outputs
- )
- triangulation.main(
- ref_sfm, sift_sfm, ref_images, sfm_pairs, features, sfm_matches
- )
-
- generate_query_list(root, query_list, slice_)
- global_descriptors = extract_features.main(
- retrieval_conf, ref_images, outputs
- )
- global_descriptors = extract_features.main(
- retrieval_conf, query_images, outputs
- )
- pairs_from_retrieval.main(
- global_descriptors,
- loc_pairs,
- num_loc,
- query_list=query_list,
- db_model=ref_sfm,
- )
-
- features = extract_features.main(
- feature_conf, query_images, outputs, as_half=True
- )
- loc_matches = match_features.main(
- matcher_conf, loc_pairs, feature_conf["output"], outputs
- )
-
- localize_sfm.main(
- ref_sfm,
- dataset / "queries/*_time_queries_with_intrinsics.txt",
- loc_pairs,
- features,
- loc_matches,
- results,
- )
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--slices",
- type=str,
- default="*",
- help="a single number, an interval (e.g. 2-6), "
- "or a Python-style list or int (e.g. [2, 3, 4]",
- )
- parser.add_argument(
- "--dataset",
- type=Path,
- default="datasets/cmu_extended",
- help="Path to the dataset, default: %(default)s",
- )
- parser.add_argument(
- "--outputs",
- type=Path,
- default="outputs/aachen_extended",
- help="Path to the output directory, default: %(default)s",
- )
- parser.add_argument(
- "--num_covis",
- type=int,
- default=20,
- help="Number of image pairs for SfM, default: %(default)s",
- )
- parser.add_argument(
- "--num_loc",
- type=int,
- default=10,
- help="Number of image pairs for loc, default: %(default)s",
- )
- args = parser.parse_args()
-
-    if args.slices == "*":
-        slices = TEST_SLICES
-    elif "-" in args.slices:
-        min_, max_ = args.slices.split("-")
-        slices = list(range(int(min_), int(max_) + 1))
-    else:
-        slices = eval(args.slices)
-        if isinstance(slices, int):
-            slices = [slices]
-
- for slice_ in slices:
- logger.info("Working on slice %s.", slice_)
- run_slice(
- f"slice{slice_}",
- args.dataset,
- args.outputs,
- args.num_covis,
- args.num_loc,
- )
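
A hedged sketch of driving the slice runner above directly from Python rather than via argparse. The paths are placeholders, and it assumes run_slice is in scope and the Extended CMU Seasons data is laid out as the script expects (slice folders plus intrinsics.txt under the dataset root).

from pathlib import Path

root = Path("datasets/cmu_extended")       # contains slice2/, slice3/, ... and intrinsics.txt
outputs = Path("outputs/cmu_extended")

for s in (2, 3):
    run_slice(f"slice{s}", root, outputs, num_covis=20, num_loc=10)
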
diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/utils/profiler.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/utils/profiler.py
deleted file mode 100644
index 0275ea34e3eb9cceb4ed809bebeda209749f5bc5..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/utils/profiler.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import torch
-from pytorch_lightning.profiler import SimpleProfiler, PassThroughProfiler
-from contextlib import contextmanager
-from pytorch_lightning.utilities import rank_zero_only
-
-
-class InferenceProfiler(SimpleProfiler):
- """
-    This profiler records the duration of actions with cuda.synchronize().
-    Use it at test time.
- """
-
- def __init__(self):
- super().__init__()
- self.start = rank_zero_only(self.start)
- self.stop = rank_zero_only(self.stop)
- self.summary = rank_zero_only(self.summary)
-
- @contextmanager
- def profile(self, action_name: str) -> None:
- try:
- torch.cuda.synchronize()
- self.start(action_name)
- yield action_name
- finally:
- torch.cuda.synchronize()
- self.stop(action_name)
-
-
-def build_profiler(name):
- if name == "inference":
- return InferenceProfiler()
- elif name == "pytorch":
- from pytorch_lightning.profiler import PyTorchProfiler
-
- return PyTorchProfiler(use_cuda=True, profile_memory=True, row_limit=100)
- elif name is None:
- return PassThroughProfiler()
- else:
- raise ValueError(f"Invalid profiler: {name}")
diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/visualize_util.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/visualize_util.py
deleted file mode 100644
index 2d1aa38bb992302fe504bc166a3fa113e5365337..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/visualize_util.py
+++ /dev/null
@@ -1,635 +0,0 @@
-""" Organize some frequently used visualization functions. """
-import cv2
-import numpy as np
-import matplotlib
-import matplotlib.pyplot as plt
-import copy
-import seaborn as sns
-
-
-# Plot junctions onto the image (return a separate copy)
-def plot_junctions(input_image, junctions, junc_size=3, color=None):
- """
- input_image: can be 0~1 float or 0~255 uint8.
- junctions: Nx2 or 2xN np array.
- junc_size: the size of the plotted circles.
- """
- # Create image copy
- image = copy.copy(input_image)
- # Make sure the image is converted to 255 uint8
- if image.dtype == np.uint8:
- pass
- # A float type image ranging from 0~1
-    elif image.dtype in [np.float32, np.float64] and image.max() <= 2.0:
-        image = (image * 255.0).astype(np.uint8)
-    # A float type image ranging from 0.~255.
-    elif image.dtype in [np.float32, np.float64] and image.mean() > 10.0:
- image = image.astype(np.uint8)
- else:
- raise ValueError(
- "[Error] Unknown image data type. Expect 0~1 float or 0~255 uint8."
- )
-
- # Check whether the image is single channel
- if len(image.shape) == 2 or ((len(image.shape) == 3) and (image.shape[-1] == 1)):
- # Squeeze to H*W first
- image = image.squeeze()
-
-        # Stack to channel 3
- image = np.concatenate([image[..., None] for _ in range(3)], axis=-1)
-
- # Junction dimensions should be N*2
- if not len(junctions.shape) == 2:
- raise ValueError("[Error] junctions should be 2-dim array.")
-
- # Always convert to N*2
- if junctions.shape[-1] != 2:
- if junctions.shape[0] == 2:
- junctions = junctions.T
- else:
- raise ValueError("[Error] At least one of the two dims should be 2.")
-
- # Round and convert junctions to int (and check the boundary)
- H, W = image.shape[:2]
-    junctions = (np.round(junctions)).astype(np.int32)
- junctions[junctions < 0] = 0
- junctions[junctions[:, 0] >= H, 0] = H - 1 # (first dim) max bounded by H-1
- junctions[junctions[:, 1] >= W, 1] = W - 1 # (second dim) max bounded by W-1
-
- # Iterate through all the junctions
- num_junc = junctions.shape[0]
- if color is None:
- color = (0, 255.0, 0)
- for idx in range(num_junc):
- # Fetch one junction
- junc = junctions[idx, :]
- cv2.circle(
- image, tuple(np.flip(junc)), radius=junc_size, color=color, thickness=3
- )
-
- return image
-
-
-# Plot line segments given junctions and the line adjacency map
-def plot_line_segments(
- input_image,
- junctions,
- line_map,
- junc_size=3,
- color=(0, 255.0, 0),
- line_width=1,
- plot_survived_junc=True,
-):
- """
- input_image: can be 0~1 float or 0~255 uint8.
- junctions: Nx2 or 2xN np array.
- line_map: NxN np array
- junc_size: the size of the plotted circles.
- color: color of the line segments (can be string "random")
- line_width: width of the drawn segments.
- plot_survived_junc: whether we only plot the survived junctions.
- """
- # Create image copy
- image = copy.copy(input_image)
- # Make sure the image is converted to 255 uint8
- if image.dtype == np.uint8:
- pass
- # A float type image ranging from 0~1
-    elif image.dtype in [np.float32, np.float64] and image.max() <= 2.0:
-        image = (image * 255.0).astype(np.uint8)
-    # A float type image ranging from 0.~255.
-    elif image.dtype in [np.float32, np.float64] and image.mean() > 10.0:
- image = image.astype(np.uint8)
- else:
- raise ValueError(
- "[Error] Unknown image data type. Expect 0~1 float or 0~255 uint8."
- )
-
- # Check whether the image is single channel
- if len(image.shape) == 2 or ((len(image.shape) == 3) and (image.shape[-1] == 1)):
- # Squeeze to H*W first
- image = image.squeeze()
-
-        # Stack to channel 3
- image = np.concatenate([image[..., None] for _ in range(3)], axis=-1)
-
- # Junction dimensions should be 2
- if not len(junctions.shape) == 2:
- raise ValueError("[Error] junctions should be 2-dim array.")
-
- # Always convert to N*2
- if junctions.shape[-1] != 2:
- if junctions.shape[0] == 2:
- junctions = junctions.T
- else:
- raise ValueError("[Error] At least one of the two dims should be 2.")
-
- # line_map dimension should be 2
- if not len(line_map.shape) == 2:
- raise ValueError("[Error] line_map should be 2-dim array.")
-
- # Color should be "random" or a list or tuple with length 3
- if color != "random":
- if not (isinstance(color, tuple) or isinstance(color, list)):
- raise ValueError("[Error] color should have type list or tuple.")
- else:
- if len(color) != 3:
- raise ValueError(
- "[Error] color should be a list or tuple with length 3."
- )
-
- # Make a copy of the line_map
- line_map_tmp = copy.copy(line_map)
-
- # Parse line_map back to segment pairs
- segments = np.zeros([0, 4])
- for idx in range(junctions.shape[0]):
- # if no connectivity, just skip it
- if line_map_tmp[idx, :].sum() == 0:
- continue
- # record the line segment
- else:
- for idx2 in np.where(line_map_tmp[idx, :] == 1)[0]:
- p1 = np.flip(junctions[idx, :]) # Convert to xy format
- p2 = np.flip(junctions[idx2, :]) # Convert to xy format
- segments = np.concatenate(
- (segments, np.array([p1[0], p1[1], p2[0], p2[1]])[None, ...]),
- axis=0,
- )
-
- # Update line_map
- line_map_tmp[idx, idx2] = 0
- line_map_tmp[idx2, idx] = 0
-
- # Draw segment pairs
- for idx in range(segments.shape[0]):
-        seg = np.round(segments[idx, :]).astype(np.int32)
- # Decide the color
- if color != "random":
- color = tuple(color)
- else:
- color = tuple(
- np.random.rand(
- 3,
- )
- )
- cv2.line(
- image, tuple(seg[:2]), tuple(seg[2:]), color=color, thickness=line_width
- )
-
- # Also draw the junctions
- if not plot_survived_junc:
- num_junc = junctions.shape[0]
- for idx in range(num_junc):
- # Fetch one junction
- junc = junctions[idx, :]
- cv2.circle(
- image,
- tuple(np.flip(junc)),
- radius=junc_size,
- color=(0, 255.0, 0),
- thickness=3,
- )
- # Only plot the junctions which are part of a line segment
- else:
- for idx in range(segments.shape[0]):
-            seg = np.round(segments[idx, :]).astype(np.int32)  # Already in HW format.
- cv2.circle(
- image,
- tuple(seg[:2]),
- radius=junc_size,
- color=(0, 255.0, 0),
- thickness=3,
- )
- cv2.circle(
- image,
- tuple(seg[2:]),
- radius=junc_size,
- color=(0, 255.0, 0),
- thickness=3,
- )
-
- return image
-
-
-# Plot line segments given Nx4 or Nx2x2 line segments
-def plot_line_segments_from_segments(
- input_image, line_segments, junc_size=3, color=(0, 255.0, 0), line_width=1
-):
- # Create image copy
- image = copy.copy(input_image)
- # Make sure the image is converted to 255 uint8
- if image.dtype == np.uint8:
- pass
- # A float type image ranging from 0~1
-    elif image.dtype in [np.float32, np.float64] and image.max() <= 2.0:
-        image = (image * 255.0).astype(np.uint8)
-    # A float type image ranging from 0.~255.
-    elif image.dtype in [np.float32, np.float64] and image.mean() > 10.0:
- image = image.astype(np.uint8)
- else:
- raise ValueError(
- "[Error] Unknown image data type. Expect 0~1 float or 0~255 uint8."
- )
-
- # Check whether the image is single channel
- if len(image.shape) == 2 or ((len(image.shape) == 3) and (image.shape[-1] == 1)):
- # Squeeze to H*W first
- image = image.squeeze()
-
-        # Stack to channel 3
- image = np.concatenate([image[..., None] for _ in range(3)], axis=-1)
-
-    # Check whether line_segments are in (1) Nx4 or (2) Nx2x2 format.
- H, W, _ = image.shape
- # (1) Nx4 format
- if len(line_segments.shape) == 2 and line_segments.shape[-1] == 4:
- # Round to int32
- line_segments = line_segments.astype(np.int32)
-
- # Clip H dimension
- line_segments[:, 0] = np.clip(line_segments[:, 0], a_min=0, a_max=H - 1)
- line_segments[:, 2] = np.clip(line_segments[:, 2], a_min=0, a_max=H - 1)
-
- # Clip W dimension
- line_segments[:, 1] = np.clip(line_segments[:, 1], a_min=0, a_max=W - 1)
- line_segments[:, 3] = np.clip(line_segments[:, 3], a_min=0, a_max=W - 1)
-
- # Convert to Nx2x2 format
- line_segments = np.concatenate(
- [
- np.expand_dims(line_segments[:, :2], axis=1),
- np.expand_dims(line_segments[:, 2:], axis=1),
- ],
- axis=1,
- )
-
- # (2) Nx2x2 format
- elif len(line_segments.shape) == 3 and line_segments.shape[-1] == 2:
- # Round to int32
- line_segments = line_segments.astype(np.int32)
-
- # Clip H dimension
- line_segments[:, :, 0] = np.clip(line_segments[:, :, 0], a_min=0, a_max=H - 1)
- line_segments[:, :, 1] = np.clip(line_segments[:, :, 1], a_min=0, a_max=W - 1)
-
- else:
- raise ValueError(
- "[Error] line_segments should be either Nx4 or Nx2x2 in HW format."
- )
-
- # Draw segment pairs (all segments should be in HW format)
- image = image.copy()
- for idx in range(line_segments.shape[0]):
- seg = np.round(line_segments[idx, :, :]).astype(np.int32)
- # Decide the color
- if color != "random":
- color = tuple(color)
- else:
- color = tuple(
- np.random.rand(
- 3,
- )
- )
- cv2.line(
- image,
- tuple(np.flip(seg[0, :])),
- tuple(np.flip(seg[1, :])),
- color=color,
- thickness=line_width,
- )
-
- # Also draw the junctions
- cv2.circle(
- image,
- tuple(np.flip(seg[0, :])),
- radius=junc_size,
- color=(0, 255.0, 0),
- thickness=3,
- )
- cv2.circle(
- image,
- tuple(np.flip(seg[1, :])),
- radius=junc_size,
- color=(0, 255.0, 0),
- thickness=3,
- )
-
- return image
-
-
-# Additional functions to visualize multiple images at the same time,
-# e.g. for line matching
-def plot_images(imgs, titles=None, cmaps="gray", dpi=100, size=6, pad=0.5):
- """Plot a set of images horizontally.
- Args:
- imgs: a list of NumPy or PyTorch images, RGB (H, W, 3) or mono (H, W).
- titles: a list of strings, as titles for each image.
- cmaps: colormaps for monochrome images.
- """
- n = len(imgs)
- if not isinstance(cmaps, (list, tuple)):
- cmaps = [cmaps] * n
- figsize = (size * n, size * 3 / 4) if size is not None else None
- fig, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi)
- if n == 1:
- ax = [ax]
- for i in range(n):
- ax[i].imshow(imgs[i], cmap=plt.get_cmap(cmaps[i]))
- ax[i].get_yaxis().set_ticks([])
- ax[i].get_xaxis().set_ticks([])
- ax[i].set_axis_off()
- for spine in ax[i].spines.values(): # remove frame
- spine.set_visible(False)
- if titles:
- ax[i].set_title(titles[i])
- fig.tight_layout(pad=pad)
-
-
-def plot_keypoints(kpts, colors="lime", ps=4):
- """Plot keypoints for existing images.
- Args:
- kpts: list of ndarrays of size (N, 2).
- colors: string, or list of list of tuples (one for each keypoints).
- ps: size of the keypoints as float.
- """
- if not isinstance(colors, list):
- colors = [colors] * len(kpts)
- axes = plt.gcf().axes
- for a, k, c in zip(axes, kpts, colors):
- a.scatter(k[:, 0], k[:, 1], c=c, s=ps, linewidths=0)
-
-
-def plot_matches(kpts0, kpts1, color=None, lw=1.5, ps=4, indices=(0, 1), a=1.0):
- """Plot matches for a pair of existing images.
- Args:
- kpts0, kpts1: corresponding keypoints of size (N, 2).
- color: color of each match, string or RGB tuple. Random if not given.
- lw: width of the lines.
- ps: size of the end points (no endpoint if ps=0)
- indices: indices of the images to draw the matches on.
- a: alpha opacity of the match lines.
- """
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- ax0, ax1 = ax[indices[0]], ax[indices[1]]
- fig.canvas.draw()
-
- assert len(kpts0) == len(kpts1)
- if color is None:
- color = matplotlib.cm.hsv(np.random.rand(len(kpts0))).tolist()
- elif len(color) > 0 and not isinstance(color[0], (tuple, list)):
- color = [color] * len(kpts0)
-
- if lw > 0:
- # transform the points into the figure coordinate system
- transFigure = fig.transFigure.inverted()
- fkpts0 = transFigure.transform(ax0.transData.transform(kpts0))
- fkpts1 = transFigure.transform(ax1.transData.transform(kpts1))
- fig.lines += [
- matplotlib.lines.Line2D(
- (fkpts0[i, 0], fkpts1[i, 0]),
- (fkpts0[i, 1], fkpts1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c=color[i],
- linewidth=lw,
- alpha=a,
- )
- for i in range(len(kpts0))
- ]
-
- # freeze the axes to prevent the transform to change
- ax0.autoscale(enable=False)
- ax1.autoscale(enable=False)
-
- if ps > 0:
- ax0.scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps, zorder=2)
- ax1.scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps, zorder=2)
-
-
-def plot_lines(
- lines, line_colors="orange", point_colors="cyan", ps=4, lw=2, indices=(0, 1)
-):
- """Plot lines and endpoints for existing images.
- Args:
- lines: list of ndarrays of size (N, 2, 2).
- colors: string, or list of list of tuples (one for each keypoints).
- ps: size of the keypoints as float pixels.
- lw: line width as float pixels.
- indices: indices of the images to draw the matches on.
- """
- if not isinstance(line_colors, list):
- line_colors = [line_colors] * len(lines)
- if not isinstance(point_colors, list):
- point_colors = [point_colors] * len(lines)
-
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- axes = [ax[i] for i in indices]
- fig.canvas.draw()
-
- # Plot the lines and junctions
- for a, l, lc, pc in zip(axes, lines, line_colors, point_colors):
- for i in range(len(l)):
- line = matplotlib.lines.Line2D(
- (l[i, 0, 0], l[i, 1, 0]),
- (l[i, 0, 1], l[i, 1, 1]),
- zorder=1,
- c=lc,
- linewidth=lw,
- )
- a.add_line(line)
- pts = l.reshape(-1, 2)
- a.scatter(pts[:, 0], pts[:, 1], c=pc, s=ps, linewidths=0, zorder=2)
-
-
-def plot_line_matches(kpts0, kpts1, color=None, lw=1.5, indices=(0, 1), a=1.0):
- """Plot matches for a pair of existing images, parametrized by their middle point.
- Args:
- kpts0, kpts1: corresponding middle points of the lines of size (N, 2).
- color: color of each match, string or RGB tuple. Random if not given.
- lw: width of the lines.
- indices: indices of the images to draw the matches on.
- a: alpha opacity of the match lines.
- """
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- ax0, ax1 = ax[indices[0]], ax[indices[1]]
- fig.canvas.draw()
-
- assert len(kpts0) == len(kpts1)
- if color is None:
- color = matplotlib.cm.hsv(np.random.rand(len(kpts0))).tolist()
- elif len(color) > 0 and not isinstance(color[0], (tuple, list)):
- color = [color] * len(kpts0)
-
- if lw > 0:
- # transform the points into the figure coordinate system
- transFigure = fig.transFigure.inverted()
- fkpts0 = transFigure.transform(ax0.transData.transform(kpts0))
- fkpts1 = transFigure.transform(ax1.transData.transform(kpts1))
- fig.lines += [
- matplotlib.lines.Line2D(
- (fkpts0[i, 0], fkpts1[i, 0]),
- (fkpts0[i, 1], fkpts1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c=color[i],
- linewidth=lw,
- alpha=a,
- )
- for i in range(len(kpts0))
- ]
-
- # freeze the axes to prevent the transform to change
- ax0.autoscale(enable=False)
- ax1.autoscale(enable=False)
-
-
-def plot_color_line_matches(lines, correct_matches=None, lw=2, indices=(0, 1)):
- """Plot line matches for existing images with multiple colors.
- Args:
- lines: list of ndarrays of size (N, 2, 2).
- correct_matches: bool array of size (N,) indicating correct matches.
- lw: line width as float pixels.
- indices: indices of the images to draw the matches on.
- """
- n_lines = len(lines[0])
- colors = sns.color_palette("husl", n_colors=n_lines)
- np.random.shuffle(colors)
- alphas = np.ones(n_lines)
- # If correct_matches is not None, display wrong matches with a low alpha
- if correct_matches is not None:
- alphas[~np.array(correct_matches)] = 0.2
-
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- axes = [ax[i] for i in indices]
- fig.canvas.draw()
-
- # Plot the lines
- for a, l in zip(axes, lines):
- # Transform the points into the figure coordinate system
- transFigure = fig.transFigure.inverted()
- endpoint0 = transFigure.transform(a.transData.transform(l[:, 0]))
- endpoint1 = transFigure.transform(a.transData.transform(l[:, 1]))
- fig.lines += [
- matplotlib.lines.Line2D(
- (endpoint0[i, 0], endpoint1[i, 0]),
- (endpoint0[i, 1], endpoint1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c=colors[i],
- alpha=alphas[i],
- linewidth=lw,
- )
- for i in range(n_lines)
- ]
-
-
-def plot_color_lines(lines, correct_matches, wrong_matches, lw=2, indices=(0, 1)):
- """Plot line matches for existing images with multiple colors:
- green for correct matches, red for wrong ones, and blue for the rest.
- Args:
- lines: list of ndarrays of size (N, 2, 2).
- correct_matches: list of bool arrays of size N with correct matches.
-        wrong_matches: list of bool arrays of size (N,) with wrong matches.
- lw: line width as float pixels.
- indices: indices of the images to draw the matches on.
- """
- # palette = sns.color_palette()
- palette = sns.color_palette("hls", 8)
- blue = palette[5] # palette[0]
- red = palette[0] # palette[3]
- green = palette[2] # palette[2]
- colors = [np.array([blue] * len(l)) for l in lines]
- for i, c in enumerate(colors):
- c[np.array(correct_matches[i])] = green
- c[np.array(wrong_matches[i])] = red
-
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- axes = [ax[i] for i in indices]
- fig.canvas.draw()
-
- # Plot the lines
- for a, l, c in zip(axes, lines, colors):
- # Transform the points into the figure coordinate system
- transFigure = fig.transFigure.inverted()
- endpoint0 = transFigure.transform(a.transData.transform(l[:, 0]))
- endpoint1 = transFigure.transform(a.transData.transform(l[:, 1]))
- fig.lines += [
- matplotlib.lines.Line2D(
- (endpoint0[i, 0], endpoint1[i, 0]),
- (endpoint0[i, 1], endpoint1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c=c[i],
- linewidth=lw,
- )
- for i in range(len(l))
- ]
-
-
-def plot_subsegment_matches(lines, subsegments, lw=2, indices=(0, 1)):
- """Plot line matches for existing images with multiple colors and
- highlight the actually matched subsegments.
- Args:
- lines: list of ndarrays of size (N, 2, 2).
- subsegments: list of ndarrays of size (N, 2, 2).
- lw: line width as float pixels.
- indices: indices of the images to draw the matches on.
- """
- n_lines = len(lines[0])
- colors = sns.cubehelix_palette(
- start=2, rot=-0.2, dark=0.3, light=0.7, gamma=1.3, hue=1, n_colors=n_lines
- )
-
- fig = plt.gcf()
- ax = fig.axes
- assert len(ax) > max(indices)
- axes = [ax[i] for i in indices]
- fig.canvas.draw()
-
- # Plot the lines
- for a, l, ss in zip(axes, lines, subsegments):
- # Transform the points into the figure coordinate system
- transFigure = fig.transFigure.inverted()
-
- # Draw full line
- endpoint0 = transFigure.transform(a.transData.transform(l[:, 0]))
- endpoint1 = transFigure.transform(a.transData.transform(l[:, 1]))
- fig.lines += [
- matplotlib.lines.Line2D(
- (endpoint0[i, 0], endpoint1[i, 0]),
- (endpoint0[i, 1], endpoint1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c="red",
- alpha=0.7,
- linewidth=lw,
- )
- for i in range(n_lines)
- ]
-
- # Draw matched subsegment
- endpoint0 = transFigure.transform(a.transData.transform(ss[:, 0]))
- endpoint1 = transFigure.transform(a.transData.transform(ss[:, 1]))
- fig.lines += [
- matplotlib.lines.Line2D(
- (endpoint0[i, 0], endpoint1[i, 0]),
- (endpoint0[i, 1], endpoint1[i, 1]),
- zorder=1,
- transform=fig.transFigure,
- c=colors[i],
- alpha=1,
- linewidth=lw,
- )
- for i in range(n_lines)
- ]
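
A small sketch of how the plotting helpers above are typically combined, assuming they are in scope; the image and the line endpoints are synthetic placeholders.

import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(240, 320).astype(np.float32)       # 0~1 float, single channel
lines = np.array([[[40.0, 50.0], [200.0, 60.0]],         # (N, 2, 2), endpoints given as (x, y)
                  [[30.0, 200.0], [220.0, 150.0]]])

plot_images([img, img], titles=["reference", "target"])  # one subplot per image
plot_lines([lines, lines], ps=6, lw=2)                   # draw the same lines on both subplots
plt.show()
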
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/stare.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/stare.py
deleted file mode 100644
index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/stare.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os.path as osp
-
-from .builder import DATASETS
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class STAREDataset(CustomDataset):
- """STARE dataset.
-
- In segmentation map annotation for STARE, 0 stands for background, which is
- included in 2 categories. ``reduce_zero_label`` is fixed to False. The
- ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to
- '.ah.png'.
- """
-
- CLASSES = ('background', 'vessel')
-
- PALETTE = [[120, 120, 120], [6, 230, 230]]
-
- def __init__(self, **kwargs):
- super(STAREDataset, self).__init__(
- img_suffix='.png',
- seg_map_suffix='.ah.png',
- reduce_zero_label=False,
- **kwargs)
- assert osp.exists(self.img_dir)
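
For orientation, this is roughly how such a dataset class is referenced from an mmseg-style config once registered; the directory names and the empty pipeline are placeholders, not values prescribed by this file.

data = dict(
    train=dict(
        type='STAREDataset',
        data_root='data/STARE',
        img_dir='images/training',
        ann_dir='annotations/training',
        pipeline=[]))
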
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py
deleted file mode 100644
index aa914b5bb25124d1ff199553d96713d6a80484c0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import torch
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from annotator.uniformer.mmseg.ops import resize
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-class ASPPModule(nn.ModuleList):
- """Atrous Spatial Pyramid Pooling (ASPP) Module.
-
- Args:
- dilations (tuple[int]): Dilation rate of each layer.
- in_channels (int): Input channels.
- channels (int): Channels after modules, before conv_seg.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg,
- act_cfg):
- super(ASPPModule, self).__init__()
- self.dilations = dilations
- self.in_channels = in_channels
- self.channels = channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- for dilation in dilations:
- self.append(
- ConvModule(
- self.in_channels,
- self.channels,
- 1 if dilation == 1 else 3,
- dilation=dilation,
- padding=0 if dilation == 1 else dilation,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
-
- def forward(self, x):
- """Forward function."""
- aspp_outs = []
- for aspp_module in self:
- aspp_outs.append(aspp_module(x))
-
- return aspp_outs
-
-
-@HEADS.register_module()
-class ASPPHead(BaseDecodeHead):
- """Rethinking Atrous Convolution for Semantic Image Segmentation.
-
-    This head is the implementation of `DeepLabV3
-    <https://arxiv.org/abs/1706.05587>`_.
-
- Args:
- dilations (tuple[int]): Dilation rates for ASPP module.
- Default: (1, 6, 12, 18).
- """
-
- def __init__(self, dilations=(1, 6, 12, 18), **kwargs):
- super(ASPPHead, self).__init__(**kwargs)
- assert isinstance(dilations, (list, tuple))
- self.dilations = dilations
- self.image_pool = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- ConvModule(
- self.in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.aspp_modules = ASPPModule(
- dilations,
- self.in_channels,
- self.channels,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.bottleneck = ConvModule(
- (len(dilations) + 1) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- aspp_outs = [
- resize(
- self.image_pool(x),
- size=x.size()[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- ]
- aspp_outs.extend(self.aspp_modules(x))
- aspp_outs = torch.cat(aspp_outs, dim=1)
- output = self.bottleneck(aspp_outs)
- output = self.cls_seg(output)
- return output
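
A hedged smoke test for the head above. It assumes the surrounding mmseg package is importable (so BaseDecodeHead's defaults, the loss registry and resize() are available); the channel sizes and class count are arbitrary.

import torch

head = ASPPHead(
    dilations=(1, 6, 12, 18),
    in_channels=64,
    channels=32,
    num_classes=4,
    norm_cfg=dict(type='BN'))
head.eval()                               # use running BN stats for the smoke test

feats = [torch.randn(2, 64, 32, 32)]      # list of backbone feature maps, the last one is used
logits = head(feats)                      # (2, num_classes, 32, 32)
print(logits.shape)
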
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/data_parallel.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/data_parallel.py
deleted file mode 100644
index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/data_parallel.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from itertools import chain
-
-from torch.nn.parallel import DataParallel
-
-from .scatter_gather import scatter_kwargs
-
-
-class MMDataParallel(DataParallel):
- """The DataParallel module that supports DataContainer.
-
- MMDataParallel has two main differences with PyTorch DataParallel:
-
- - It supports a custom type :class:`DataContainer` which allows more
- flexible control of input data during both GPU and CPU inference.
-    - It implements two more APIs ``train_step()`` and ``val_step()``.
-
-    Args:
-        module (:class:`nn.Module`): Module to be encapsulated.
-        device_ids (list[int]): Device IDs of modules to be scattered to.
-            Defaults to None when GPU is not available.
- output_device (str | int): Device ID for output. Defaults to None.
- dim (int): Dimension used to scatter the data. Defaults to 0.
- """
-
- def __init__(self, *args, dim=0, **kwargs):
- super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs)
- self.dim = dim
-
- def forward(self, *inputs, **kwargs):
- """Override the original forward function.
-
- The main difference lies in the CPU inference where the data in
- :class:`DataContainers` will still be gathered.
- """
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module(*inputs[0], **kwargs[0])
- else:
- return super().forward(*inputs, **kwargs)
-
- def scatter(self, inputs, kwargs, device_ids):
- return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
-
- def train_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.train_step(*inputs[0], **kwargs[0])
-
-        assert len(self.device_ids) == 1, \
-            ('MMDataParallel only supports single GPU training, if you need to'
-             ' train with multiple GPUs, please use MMDistributedDataParallel'
-             ' instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.train_step(*inputs[0], **kwargs[0])
-
- def val_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.val_step(*inputs[0], **kwargs[0])
-
- assert len(self.device_ids) == 1, \
- ('MMDataParallel only supports single GPU training, if you need to'
- ' train with multiple GPUs, please use MMDistributedDataParallel'
- ' instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.val_step(*inputs[0], **kwargs[0])
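
A typical single-GPU wrapping of the class above, sketched under the assumption that one CUDA device is available; multi-GPU training is explicitly rejected by train_step()/val_step().

import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, 3, padding=1).cuda()
wrapped = MMDataParallel(model, device_ids=[0])

out = wrapped(torch.randn(2, 3, 32, 32).cuda())
print(out.shape)                          # torch.Size([2, 8, 32, 32])
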
diff --git a/spaces/SHSH0819/event_detection_app/README.md b/spaces/SHSH0819/event_detection_app/README.md
deleted file mode 100644
index 23fee694bb7609911d4617c04a57f595d1b8a5d9..0000000000000000000000000000000000000000
--- a/spaces/SHSH0819/event_detection_app/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Event Detection App
-emoji: 👀
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/constants.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/constants.py
deleted file mode 100644
index 075bd21c09ba2a5cf11e634a3b75531032272fe8..0000000000000000000000000000000000000000
--- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/constants.py
+++ /dev/null
@@ -1,53 +0,0 @@
-from pathlib import Path
-from PIL import Image
-
-# from dotenv import load_dotenv, find_dotenv # pip install python-dotenv==1.0.0
-
-from __version__ import __VERSION__ as APP_VERSION
-
-_SCRIPT_PATH = Path(__file__).absolute()
-PARENT_APP_DIR = _SCRIPT_PATH.parent
-TEMP_DIR = PARENT_APP_DIR / 'tempDir'
-ROOT_DIR = PARENT_APP_DIR.parent
-STATIC_DIR = ROOT_DIR / 'static'
-
-# _env_file_path = find_dotenv(str(CODE_DIR / '.env')) # Check if this path is correct
-# if _env_file_path:
-# load_dotenv(_env_file_path)
-
-ST_CONFIG = {
- "page_title": "NTT Data - Chat Q&A",
- # "page_icon": Image.open(STATIC_DIR / "mini_nttdata.jpg"),
-}
-
-OPERATING_MODE = "debug" # debug, preproduction, production
-
-REUSE_ANSWERS = False
-
-LOAD_INDEX_LOCALLY = False
-SAVE_INDEX_LOCALLY = False
-
-# x$ per 1000 tokens
-PRICES = {
- 'text-embedding-ada-002': 0.0004,
- 'text-davinci-003': 0.02,
- 'gpt-3': 0.002,
- 'gpt-4': 0.06, # 8K context
-}
-
-SOURCES_IDS = {
- # "Without source. Only chat": 4,
- "local files": 1,
- "urls": 3
-}
-
-TYPE_IDS = {
- "OpenAI": 2,
- "MSF Azure OpenAI Service": 1,
-}
-
-
-INDEX_IDS = {
- "FAISS": 1,
- "Pinecone": 2,
-}
diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/safety_checker.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/safety_checker.py
deleted file mode 100644
index 09de92eeb1ec7e64863839012b1eddba444ad80a..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/stable_diffusion/safety_checker.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-
-from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel
-
-from ...utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-
-def cosine_distance(image_embeds, text_embeds):
- normalized_image_embeds = nn.functional.normalize(image_embeds)
- normalized_text_embeds = nn.functional.normalize(text_embeds)
- return torch.mm(normalized_image_embeds, normalized_text_embeds.t())
-
-
-class StableDiffusionSafetyChecker(PreTrainedModel):
- config_class = CLIPConfig
-
- def __init__(self, config: CLIPConfig):
- super().__init__(config)
-
- self.vision_model = CLIPVisionModel(config.vision_config)
- self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False)
-
- self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False)
- self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False)
-
- self.register_buffer("concept_embeds_weights", torch.ones(17))
- self.register_buffer("special_care_embeds_weights", torch.ones(3))
-
- @torch.no_grad()
- def forward(self, clip_input, images):
- pooled_output = self.vision_model(clip_input)[1] # pooled_output
- image_embeds = self.visual_projection(pooled_output)
-
- special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().numpy()
- cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().numpy()
-
- result = []
- batch_size = image_embeds.shape[0]
- for i in range(batch_size):
- result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []}
-
-            # increase this value to create a stronger `nsfw` filter
- # at the cost of increasing the possibility of filtering benign images
- adjustment = 0.0
-
-            for concept_idx in range(len(special_cos_dist[0])):
-                concept_cos = special_cos_dist[i][concept_idx]
-                concept_threshold = self.special_care_embeds_weights[concept_idx].item()
-                result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
-                if result_img["special_scores"][concept_idx] > 0:
-                    result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]})
-                    adjustment = 0.01
-
-            for concept_idx in range(len(cos_dist[0])):
-                concept_cos = cos_dist[i][concept_idx]
-                concept_threshold = self.concept_embeds_weights[concept_idx].item()
-                result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
-                if result_img["concept_scores"][concept_idx] > 0:
-                    result_img["bad_concepts"].append(concept_idx)
-
- result.append(result_img)
-
- has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
-
- for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
- if has_nsfw_concept:
- images[idx] = np.zeros(images[idx].shape) # black image
-
- if any(has_nsfw_concepts):
- logger.warning(
- "Potential NSFW content was detected in one or more images. A black image will be returned instead."
- " Try again with a different prompt and/or seed."
- )
-
- return images, has_nsfw_concepts
-
- @torch.inference_mode()
- def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.FloatTensor):
- pooled_output = self.vision_model(clip_input)[1] # pooled_output
- image_embeds = self.visual_projection(pooled_output)
-
- special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds)
- cos_dist = cosine_distance(image_embeds, self.concept_embeds)
-
- # increase this value to create a stronger `nsfw` filter
- # at the cost of increasing the possibility of filtering benign images
- adjustment = 0.0
-
- special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment
- # special_scores = special_scores.round(decimals=3)
- special_care = torch.any(special_scores > 0, dim=1)
- special_adjustment = special_care * 0.01
- special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1])
-
- concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment
- # concept_scores = concept_scores.round(decimals=3)
- has_nsfw_concepts = torch.any(concept_scores > 0, dim=1)
-
- images[has_nsfw_concepts] = 0.0 # black image
-
- return images, has_nsfw_concepts
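
A rough sketch of exercising the checker on its own. The checkpoint names are the commonly used public ones and should be treated as assumptions, the random image stands in for a decoded sample, and downloading the weights requires network access.

import numpy as np
from transformers import CLIPFeatureExtractor

checker = StableDiffusionSafetyChecker.from_pretrained(
    "CompVis/stable-diffusion-safety-checker")
extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-large-patch14")

images = [(np.random.rand(512, 512, 3) * 255).astype(np.uint8)]   # HWC uint8 stand-in
clip_input = extractor(images, return_tensors="pt").pixel_values
checked_images, has_nsfw = checker(clip_input=clip_input, images=images)
print(has_nsfw)                            # e.g. [False]
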
diff --git a/spaces/Salesforce/EDICT/my_diffusers/utils/outputs.py b/spaces/Salesforce/EDICT/my_diffusers/utils/outputs.py
deleted file mode 100644
index b02f62d02d0322401fd9926aca9f792a4696cc1e..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/utils/outputs.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Generic utilities
-"""
-
-import warnings
-from collections import OrderedDict
-from dataclasses import fields
-from typing import Any, Tuple
-
-import numpy as np
-
-from .import_utils import is_torch_available
-
-
-def is_tensor(x):
- """
- Tests if `x` is a `torch.Tensor` or `np.ndarray`.
- """
- if is_torch_available():
- import torch
-
- if isinstance(x, torch.Tensor):
- return True
-
- return isinstance(x, np.ndarray)
-
-
-class BaseOutput(OrderedDict):
- """
- Base class for all model outputs as dataclass. Has a `__getitem__` that allows indexing by integer or slice (like a
- tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular
- python dictionary.
-
-
-
- You can't unpack a `BaseOutput` directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a tuple
- before.
-
-
- """
-
- def __post_init__(self):
- class_fields = fields(self)
-
- # Safety and consistency checks
- if not len(class_fields):
- raise ValueError(f"{self.__class__.__name__} has no fields.")
-
- for field in class_fields:
- v = getattr(self, field.name)
- if v is not None:
- self[field.name] = v
-
- def __delitem__(self, *args, **kwargs):
- raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.")
-
- def setdefault(self, *args, **kwargs):
- raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.")
-
- def pop(self, *args, **kwargs):
- raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.")
-
- def update(self, *args, **kwargs):
- raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.")
-
- def __getitem__(self, k):
- if isinstance(k, str):
- inner_dict = {k: v for (k, v) in self.items()}
- if self.__class__.__name__ in ["StableDiffusionPipelineOutput", "ImagePipelineOutput"] and k == "sample":
- warnings.warn(
- "The keyword 'samples' is deprecated and will be removed in version 0.4.0. Please use `.images` or"
- " `'images'` instead.",
- DeprecationWarning,
- )
- return inner_dict["images"]
- return inner_dict[k]
- else:
- return self.to_tuple()[k]
-
- def __setattr__(self, name, value):
- if name in self.keys() and value is not None:
- # Don't call self.__setitem__ to avoid recursion errors
- super().__setitem__(name, value)
- super().__setattr__(name, value)
-
- def __setitem__(self, key, value):
- # Will raise a KeyException if needed
- super().__setitem__(key, value)
- # Don't call self.__setattr__ to avoid recursion errors
- super().__setattr__(key, value)
-
- def to_tuple(self) -> Tuple[Any]:
- """
- Convert self to a tuple containing all the attributes/keys that are not `None`.
- """
- return tuple(self[k] for k in self.keys())
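
To illustrate the dict/dataclass hybrid behaviour documented above, a small sketch; ToyOutput is a hypothetical subclass, and it assumes the BaseOutput class above (or the upstream diffusers.utils.BaseOutput) is in scope.

from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class ToyOutput(BaseOutput):
    images: np.ndarray
    scores: Optional[np.ndarray] = None

out = ToyOutput(images=np.zeros((1, 8, 8, 3)))
print(out.images.shape)       # attribute access
print(out["images"].shape)    # dict-style access
print(len(out.to_tuple()))    # 1 -> fields left as None are dropped
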
diff --git a/spaces/Samuelcr8/EVA/app.py b/spaces/Samuelcr8/EVA/app.py
deleted file mode 100644
index 4205e03f91904065e1610f7e6c7b2f1de1771184..0000000000000000000000000000000000000000
--- a/spaces/Samuelcr8/EVA/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/gpt2").launch()
\ No newline at end of file
diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/online_demo.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/online_demo.py
deleted file mode 100644
index d20562c921ce9e7f2bbc132321012812785f21da..0000000000000000000000000000000000000000
--- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/online_demo.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import torch
-from torch.autograd import Variable
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-
-import torch.nn as nn
-import torch.utils.data
-import numpy as np
-from opt import opt
-
-from dataloader import WebcamLoader, DataWriter, crop_from_dets, Mscoco
-from yolo.darknet import Darknet
-from yolo.util import write_results, dynamic_write_results
-from SPPE.src.main_fast_inference import *
-
-from SPPE.src.utils.img import im_to_torch
-import os
-import sys
-from tqdm import tqdm
-import time
-from fn import getTime
-import cv2
-
-from pPose_nms import write_json
-
-args = opt
-args.dataset = 'coco'
-
-
-def loop():
- n = 0
- while True:
- yield n
- n += 1
-
-
-if __name__ == "__main__":
- webcam = args.webcam
- mode = args.mode
- if not os.path.exists(args.outputpath):
- os.mkdir(args.outputpath)
-
- # Load input video
- fvs = WebcamLoader(webcam).start()
- (fourcc, fps, frameSize) = fvs.videoinfo()
- # Data writer
- save_path = os.path.join(args.outputpath, 'AlphaPose_webcam' + webcam + '.avi')
- writer = DataWriter(args.save_video, save_path, cv2.VideoWriter_fourcc(*'XVID'), fps, frameSize).start()
-
- # Load YOLO model
- print('Loading YOLO model..')
- sys.stdout.flush()
- det_model = Darknet("yolo/cfg/yolov3-spp.cfg")
- det_model.load_weights('models/yolo/yolov3-spp.weights')
- det_model.net_info['height'] = args.inp_dim
- det_inp_dim = int(det_model.net_info['height'])
- assert det_inp_dim % 32 == 0
- assert det_inp_dim > 32
- det_model
- det_model.eval()
-
- # Load pose model
- pose_dataset = Mscoco()
- if args.fast_inference:
- pose_model = InferenNet_fast(4 * 1 + 1, pose_dataset)
- else:
- pose_model = InferenNet(4 * 1 + 1, pose_dataset)
- pose_model
- pose_model.eval()
-
- runtime_profile = {
- 'ld': [],
- 'dt': [],
- 'dn': [],
- 'pt': [],
- 'pn': []
- }
-
- print('Starting webcam demo, press Ctrl + C to terminate...')
- sys.stdout.flush()
- im_names_desc = tqdm(loop())
- for i in im_names_desc:
- try:
- start_time = getTime()
-
- (img, orig_img, inp, im_dim_list) = fvs.read()
- ckpt_time, load_time = getTime(start_time)
- runtime_profile['ld'].append(load_time)
- with torch.no_grad():
- # Human Detection
- img = Variable(img)
- im_dim_list = im_dim_list
-
- prediction = det_model(img, CUDA=True)
- ckpt_time, det_time = getTime(ckpt_time)
- runtime_profile['dt'].append(det_time)
- # NMS process
- dets = dynamic_write_results(prediction, opt.confidence,
- opt.num_classes, nms=True, nms_conf=opt.nms_thesh)
- if isinstance(dets, int) or dets.shape[0] == 0:
- writer.save(None, None, None, None, None, orig_img, im_name=str(i) + '.jpg')
- continue
- im_dim_list = torch.index_select(im_dim_list, 0, dets[:, 0].long())
- scaling_factor = torch.min(det_inp_dim / im_dim_list, 1)[0].view(-1, 1)
-
- # coordinate transfer
- dets[:, [1, 3]] -= (det_inp_dim - scaling_factor * im_dim_list[:, 0].view(-1, 1)) / 2
- dets[:, [2, 4]] -= (det_inp_dim - scaling_factor * im_dim_list[:, 1].view(-1, 1)) / 2
-
- dets[:, 1:5] /= scaling_factor
- for j in range(dets.shape[0]):
- dets[j, [1, 3]] = torch.clamp(dets[j, [1, 3]], 0.0, im_dim_list[j, 0])
- dets[j, [2, 4]] = torch.clamp(dets[j, [2, 4]], 0.0, im_dim_list[j, 1])
- boxes = dets[:, 1:5].cpu()
- scores = dets[:, 5:6].cpu()
- ckpt_time, detNMS_time = getTime(ckpt_time)
- runtime_profile['dn'].append(detNMS_time)
- # Pose Estimation
- inps = torch.zeros(boxes.size(0), 3, opt.inputResH, opt.inputResW)
- pt1 = torch.zeros(boxes.size(0), 2)
- pt2 = torch.zeros(boxes.size(0), 2)
- inps, pt1, pt2 = crop_from_dets(inp, boxes, inps, pt1, pt2)
- inps = Variable(inps)
-
- hm = pose_model(inps)
- ckpt_time, pose_time = getTime(ckpt_time)
- runtime_profile['pt'].append(pose_time)
-
- writer.save(boxes, scores, hm.cpu(), pt1, pt2, orig_img, im_name=str(i) + '.jpg')
-
- ckpt_time, post_time = getTime(ckpt_time)
- runtime_profile['pn'].append(post_time)
-
- # TQDM
- im_names_desc.set_description(
- 'load time: {ld:.4f} | det time: {dt:.4f} | det NMS: {dn:.4f} | pose time: {pt:.4f} | post process: {pn:.4f}'.format(
- ld=np.mean(runtime_profile['ld']), dt=np.mean(runtime_profile['dt']), dn=np.mean(runtime_profile['dn']),
- pt=np.mean(runtime_profile['pt']), pn=np.mean(runtime_profile['pn']))
- )
- except KeyboardInterrupt:
- break
-
- print(' ')
- print('===========================> Finish Model Running.')
- if (args.save_img or args.save_video) and not args.vis_fast:
- print('===========================> Rendering remaining images in the queue...')
- print('===========================> If this step takes too long, you can enable the --vis_fast flag to use fast rendering (real-time).')
- while writer.running():
- pass
- writer.stop()
- final_result = writer.results()
- write_json(final_result, args.outputpath)
diff --git a/spaces/Shine1916/MyChat/app.py b/spaces/Shine1916/MyChat/app.py
deleted file mode 100644
index 9a032b9c225c4d4356d142159774ce6d36eba2fc..0000000000000000000000000000000000000000
--- a/spaces/Shine1916/MyChat/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import os
-import openai
-import gradio as gr
-
-#if you have OpenAI API key as an environment variable, enable the below
-#openai.api_key = os.getenv("OPENAI_API_KEY")
-
-#if you have OpenAI API key as a string, enable the below
-openai.api_key = "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"
-
-start_sequence = "\nAI:"
-restart_sequence = "\nHuman: "
-
-prompt = "请输入你的问题,我会尽力为你解答!"
-
-def openai_create(prompt):
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt,
- temperature=0.9,
- max_tokens=2048,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0.6,
- stop=[" Human:", " AI:"]
- )
-
- return response.choices[0].text
-
-
-
-def chatgpt_clone(input, history):
- history = history or []
- s = list(sum(history, ()))
- s.append(input)
- inp = ' '.join(s)
- output = openai_create(inp)
- history.append((input, output))
- return history, history
-
-
-block = gr.Blocks()
-
-
-with block:
- gr.Markdown("""
-
-Xforce keygen autodesk inventor 2013 64 bit free. ... Listen to Inventor Engineer-to-Order 2013 64 Bit Adlmint.dll Crack Download and 167 more episodes by ... Inventor 2013 Xforce Keygen 64bits Mega. hapdenze (Applicant). 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Benaam Rishte Movie In Hindi Hd Download Utorrent Movies [PATCHED].md b/spaces/bioriAsaeru/text-to-voice/Benaam Rishte Movie In Hindi Hd Download Utorrent Movies [PATCHED].md
deleted file mode 100644
index 36908d721af7b979a4ed1cbdc9f934a36aaeba05..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Benaam Rishte Movie In Hindi Hd Download Utorrent Movies [PATCHED].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Benaam Rishte movie in hindi hd download utorrent movies
Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R is a powerful and easy-to-use compressor plugin that can emulate the sound of a legendary hardware unit. It has many features and benefits that make it a valuable tool for any audio enthusiast or professional.
-
If you are looking for a high-quality compressor plugin that can provide you with smooth and transparent compression sound, you should definitely give Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R a try.
How to use Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R?
-
Using Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R is very easy and intuitive. You can simply load the plugin on your audio track or bus and start tweaking the parameters to achieve the desired compression sound. You can also use the presets that are included in the plugin to get some inspiration or to quickly find a suitable setting for your material.
-
Some of the tips and tricks that you can use to get the most out of Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R are:
-
-
Use the Soft mode for gentle and transparent compression: The Soft mode is ideal for situations where you want to apply some subtle compression without affecting the natural dynamics and transients of your audio material. The Soft mode starts at 1:1 ratio and increases with input level up to 8:1, providing a smooth and gradual compression effect.
-
Use the Brick mode for limiting and peak reduction: The Brick mode is ideal for situations where you want to limit or reduce the peaks of your audio material without introducing distortion or artifacts. The Brick mode acts as an analog limiter and cuts off signal peaks at the set threshold, providing a clean and consistent output level.
-
Use the sidechain filter to control the compression frequency range: The sidechain filter allows you to adjust the frequency range that affects the compression behavior. You can choose from 60 Hz or 90 Hz positions, which will largely ignore those frequencies by the compressor, or you can turn off the filter for a full-range compression.
-
Use the mix parameter to blend the dry and wet signals: The mix parameter allows you to adjust the balance between the dry (unprocessed) and wet (processed) signals. You can use this parameter to create parallel compression effects or to fine-tune the amount of compression applied to your audio material.
-
-
What are the pros and cons of Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R?
-
Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R is a great compressor plugin that can offer many advantages and benefits to your audio production. However, it also has some drawbacks and limitations that you should be aware of. Here are some of the pros and cons of Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R:
-
-
| Pros | Cons |
| --- | --- |
| High-quality sound that emulates a legendary hardware unit | Expensive price compared to some other compressor plugins |
| Versatile and flexible features that can suit different situations and styles | Illegal and unethical to use the cracked version of the plugin |
| Simple and intuitive user interface that is easy to use and customize | Potential compatibility issues with some antivirus software or operating systems |
| Presets included that can help you find a suitable setting quickly | No demo version available to try before you buy |
-
-
## How to compare Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R with other compressor plugins?
-
Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R is not the only compressor plugin that can emulate the sound of a hardware unit. There are many other compressor plugins that can offer similar or different features and benefits to your audio production. However, not all compressor plugins are created equal and some may suit your needs better than others.
-
Some of the factors that you can use to compare Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R with other compressor plugins are:
-
-
Price: How much does the plugin cost and what value does it offer for your money? Is it worth investing in a premium plugin or can you get a similar result with a cheaper or free plugin?
-
Sound quality: How well does the plugin reproduce the sound and behavior of the hardware unit? Does it sound authentic, natural, and transparent or does it introduce unwanted artifacts, noise, or distortion?
-
Features and flexibility: How many features and options does the plugin offer and how easy are they to use and customize? Does the plugin provide enough control and versatility to suit different situations and styles or is it too limited or complex?
-
User interface and usability: How user-friendly and intuitive is the plugin interface and how well does it integrate with your digital audio workstation? Does the plugin provide clear feedback and visual indicators or is it confusing and cluttered?
-
Support and updates: How reliable and stable is the plugin and how often does it receive updates and improvements? Does the plugin developer provide good customer service and technical support or is it hard to reach and communicate with them?
-
-
## What are some of the best alternatives to Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R?
-
If you are looking for some of the best alternatives to Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R, you might want to check out some of these compressor plugins that can also emulate the sound of a hardware unit:
-
-
-
FabFilter Pro-C 2: This is a versatile and powerful compressor plugin that can handle any kind of compression task with ease. It offers eight different compression styles, ranging from clean and transparent to warm and punchy, as well as advanced features such as sidechain EQ, oversampling, lookahead, mid/side processing, external sidechain input, and more.
-
Slate Digital FG-Grey: This is a faithful emulation of the legendary SSL G-Series bus compressor that can add glue, punch, and cohesion to your mixes. It offers a simple but effective interface with four ratio settings, threshold, attack, release, make-up gain, auto release, high-pass filter, mix knob, and VU meter.
-
Softube Tube-Tech CL 1B Mk II: This is a modernized version of the classic Tube-Tech CL 1B optical compressor that can deliver smooth and musical compression with a warm tube sound. It offers a redesigned interface with improved sound quality, lower CPU usage, external sidechain input, parallel compression option, dry/wet knob, mid/side mode, saturation control, and more.
-
-
## How to get the best results with Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R?
-
Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R is a powerful compressor plugin that can enhance your audio production in many ways. However, like any other plugin, it requires some knowledge and skill to use it effectively and efficiently. Here are some tips and best practices that can help you get the best results with Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R:
-
-
Use it on the right sources: Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R can work well on a variety of audio sources, such as drums, vocals, guitars, bass, synths, and more. However, it may not be suitable for every source or every situation. For example, it may not be the best choice for very dynamic or transient-rich sources that need more control and precision, or for very delicate or subtle sources that need more transparency and clarity.
-
Use it sparingly and tastefully: Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R can add a lot of character and warmth to your sound, but it can also make it sound dull and lifeless if you overdo it. It is important to use it sparingly and tastefully, and to avoid applying too much compression or too high ratios that can squash your dynamics and transients. A little compression can go a long way.
-
Use it in context and with reference: Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R can sound great on its own, but it may not sound so great in the context of your mix or your genre. It is important to use it in context and with reference, and to compare it with other compressor plugins or hardware units that can achieve similar or different results. You may find that you need to adjust your settings or use a different plugin depending on your mix or your genre.
-
Use it creatively and experimentally: Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R can also be used creatively and experimentally to create some interesting and unique effects on your sound. You can use it to create parallel compression effects by using the mix knob, to create sidechain compression effects by using the external sidechain input, to create saturation effects by using the analog emulation feature, or to create any other effects that you can imagine by using the different modes and parameters.
-
-
## Where to learn more about Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R?
-
If you want to learn more about Vertigo.Sound.VSC-2.v1.1.2.x86.x64-R2R, you can visit some of these resources that can provide you with more information and tutorials about this plugin:
-
-
The official website of the plugin: This is where you can find the most accurate and updated information about the plugin, such as its features, specifications, requirements, price, license, support, and more.
-
The official manual of the plugin: This is where you can find the most detailed and comprehensive information about the plugin, such as its installation, activation, interface, functions, parameters, presets, tips, tricks, and more.
-
The official video tutorials of the plugin: This is where you can find some video tutorials that can show you how to use the plugin in different situations and styles, such as mastering, mixing, recording, etc.
-
The online reviews and articles of the plugin: This is where you can find some online reviews and articles that can give you some opinions and insights about the plugin from different users and experts.
-
The online forums and communities of the plugin: This is where you can find some online forums and communities that can provide you with some feedback and support from other users and developers of the plugin.
-
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Free Download Windows Xp Home Edition Ulcpc 125.md b/spaces/bioriAsaeru/text-to-voice/Free Download Windows Xp Home Edition Ulcpc 125.md
deleted file mode 100644
index e79ee8f5604f250720264a29c6d9418e69b5109c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Free Download Windows Xp Home Edition Ulcpc 125.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/blueslmj/anime-remove-background/app.py b/spaces/blueslmj/anime-remove-background/app.py
deleted file mode 100644
index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000
--- a/spaces/blueslmj/anime-remove-background/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-
-def get_mask(img, s=1024):
-    # Normalize to [0, 1], resize the longer side to `s` while keeping the aspect ratio,
-    # then pad to an s x s square so the ONNX segmentation model receives a fixed-size input.
-    img = (img / 255).astype(np.float32)
-    h, w = h0, w0 = img.shape[:-1]
-    h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
-    ph, pw = s - h, s - w
-    img_input = np.zeros([s, s, 3], dtype=np.float32)
-    img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
-    img_input = np.transpose(img_input, (2, 0, 1))  # HWC -> CHW
-    img_input = img_input[np.newaxis, :]  # add batch dimension
-    mask = rmbg_model.run(None, {'img': img_input})[0][0]
-    mask = np.transpose(mask, (1, 2, 0))
-    # Crop away the padding and resize the mask back to the original image size.
-    mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
-    mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
-    return mask
-
-
-def rmbg_fn(img):
-    # Predict the foreground mask, composite the subject over a white background,
-    # and attach the mask as an alpha channel so the result can be shown as RGBA.
-    mask = get_mask(img)
-    img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
-    mask = (mask * 255).astype(np.uint8)
-    img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
-    mask = mask.repeat(3, axis=2)  # 3-channel mask for display
-    return mask, img
-
-
-if __name__ == "__main__":
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
- rmbg_model = rt.InferenceSession(model_path, providers=providers)
- app = gr.Blocks()
- with app:
- gr.Markdown("# Anime Remove Background\n\n"
- "\n\n"
- "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
- with gr.Row():
- with gr.Column():
- input_img = gr.Image(label="input image")
- examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
- examples = gr.Dataset(components=[input_img], samples=examples_data)
- run_btn = gr.Button(variant="primary")
- output_mask = gr.Image(label="mask")
- output_img = gr.Image(label="result", image_mode="RGBA")
- examples.click(lambda x: x[0], [examples], [input_img])
- run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
- app.launch()
diff --git a/spaces/bobu5/SD-webui-controlnet-docker/on_start.sh b/spaces/bobu5/SD-webui-controlnet-docker/on_start.sh
deleted file mode 100644
index c083aa9e035a19168d9409785385bc21e8597c58..0000000000000000000000000000000000000000
--- a/spaces/bobu5/SD-webui-controlnet-docker/on_start.sh
+++ /dev/null
@@ -1,149 +0,0 @@
-#!/bin/bash
-set -euo pipefail
-
-function download-model() {
- local _option=$1
- local _filename=$2
- local _url=$3
- local _dir
-
-    ! [ $# -eq 3 ] && (echo "usage:"; for o in checkpoint lora vae control-net embedding; do echo "  \$ download-model --$o <filename> <url>"; done) || true
- [ $# -eq 0 ] && return 0 || ! [ $# -eq 3 ] && (echo ""; echo "error - invalid number of arguments (expected 3, received $#)"; echo -n "\$ download-model $1"; (for arg in "${@: 2}"; do echo -n " \"${arg//\"/\\\"}\""; done) && echo "") && return 1 || true
-
- case ${_option,,} in
- --checkpoint) _dir="/app/stable-diffusion-webui/models/Stable-diffusion";;
- --lora) _dir="/app/stable-diffusion-webui/extensions/sd-webui-additional-networks/models/LoRA";;
- --vae) _dir="/app/stable-diffusion-webui/models/VAE";;
- --control-net) _dir="/app/stable-diffusion-webui/models/ControlNet";;
- --embedding) _dir="/app/stable-diffusion-webui/embeddings";;
-
- *) echo "error - unknown first argument: '$1' (valid options are --checkpoint, --lora, --vae, --control-net or --embedding):"; echo "\$ download-model $1 \"$2\" \"$3\""; return 1;;
- esac
-
- echo "\$ download-model $_option \"$2\" \"$3\"" ; echo ""
- aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $_url -d $_dir -o $_filename && echo ""
-}
-
-## ----------------------------
-
-## Adds a header to the webui on Hugging Face Spaces.
-sed -i -e '/demo:/r /app/stable-diffusion-webui/header_patch.py' /app/stable-diffusion-webui/modules/ui.py
-
-## ----------------------------
-
-## Installing fewer models if the $IS_SHARED_UI environment variable is set.
-if [ ${IS_SHARED_UI:-0} != 0 ]; then
- download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-5-pruned-emaonly.safetensors"
- download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml"
- download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml"
- download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors"
- download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors"
- download-model --control-net "control_normal-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors"
- download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors"
- download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors"
- download-model --checkpoint "AtoZovyaRPGArtistTools15_sd15V1.safetensors" "https://civitai.com/api/download/models/10185"
- download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt"
- sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /app/stable-diffusion-webui/modules/ui.py
- sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /app/stable-diffusion-webui/modules/ui.py
- sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /app/stable-diffusion-webui/modules/ui.py
- sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /app/stable-diffusion-webui/modules/ui.py
- rm -rf /app/stable-diffusion-webui/scripts /app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui /app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser /app/stable-diffusion-webui/extensions/sd-civitai-browser /app/stable-diffusion-webui/extensions/sd-webui-additional-networks
- cp -f shared-config.json config.json
- cp -f shared-ui-config.json ui-config.json
- exit 0
-fi
-## End of lightweight installation for $IS_SHARED_UI setup.
-
-## ----------------------------
-## env $IS_SHARED_UI is not set
-## ----------------------------
-
-## Stable Diffusion 2.1 · 768 base model:
-#download-model --checkpoint "v2-1_768-ema-pruned.safetensors" "https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/36a01dc742066de2e8c91e7cf0b8f6b53ef53da1/v2-1_768-ema-pruned.safetensors"
-#download-model --checkpoint "v2-1_768-ema-pruned.yaml" "https://raw.githubusercontent.com/Stability-AI/stablediffusion/fc1488421a2761937b9d54784194157882cbc3b1/configs/stable-diffusion/v2-inference-v.yaml"
-
-## Stable Diffusion 1.5 · 512 base model:
-#download-model --checkpoint "v1-5-pruned-emaonly.safetensors" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-5-pruned-emaonly.safetensors"
-#download-model --checkpoint "v1-5-pruned-emaonly.yaml" "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/39593d5650112b4cc580433f6b0435385882d819/v1-inference.yaml"
-
-## Stable Diffusion Deliberate
-#download-model --checkpoint "deliberate_v11.safetensors" https://huggingface.co/Electricatom369/model1/blob/main/deliberate_v11.safetensors
-## ----------------------------
-
-## LoRA (low-rank adaptation) · epi_noiseoffset v2:
-download-model --lora "epiNoiseoffset_v2.safetensors" "https://civitai.com/api/download/models/16576?type=Model&format=SafeTensor"
-
-## ----------------------------
-
-## VAE (variational autoencoder) · VAE 840k EMA:
-download-model --vae "vae-ft-mse-840000-ema-pruned.safetensors" "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/629b3ad3030ce36e15e70c5db7d91df0d60c627f/vae-ft-mse-840000-ema-pruned.safetensors"
-
-## ----------------------------
-
-## ControlNet · Pre-extracted models:
-download-model --control-net "cldm_v15.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v15.yaml"
-download-model --control-net "cldm_v21.yaml" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/cldm_v21.yaml"
-download-model --control-net "control_canny-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_canny-fp16.safetensors"
-download-model --control-net "control_depth-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_depth-fp16.safetensors"
-download-model --control-net "control_hed-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_hed-fp16.safetensors"
-download-model --control-net "control_normal-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_normal-fp16.safetensors"
-download-model --control-net "control_openpose-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_openpose-fp16.safetensors"
-download-model --control-net "control_scribble-fp16.safetensors" "https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/87c3affbcad3baec52ffe39cac3a15a94902aed3/control_scribble-fp16.safetensors"
-
-## ----------------------------
-
-## Embedding · bad_prompt_version2
-download-model --embedding "bad_prompt_version2.pt" "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/72fd9d6011c2ba87b5847b7e45e6603917e3cbed/bad_prompt_version2.pt"
-
-## ----------------------------
-
-## Checkpoint · The Ally's Mix III: Revolutions:
-#download-model --checkpoint "theAllysMixIII_v10.safetensors" "https://civitai.com/api/download/models/12763?type=Model&format=SafeTensor"
-
-## Checkpoint · Dreamlike Diffusion 1.0:
-# download-model --checkpoint "dreamlike-diffusion-1.0.safetensors" "https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/00cbe4d56fd56f45e952a5be4d847f21b9782546/dreamlike-diffusion-1.0.safetensors"
-
-## Stable Diffusion Deliberate
-#download-model --checkpoint "deliberate_v11.safetensors" https://huggingface.co/Electricatom369/model1/blob/main/deliberate_v11.safetensors
-
-## Checkpoint · Dreamshaper 3.31:
-# download-model --checkpoint "DreamShaper_3.31_baked_vae-inpainting.inpainting.safetensors" "https://huggingface.co/Lykon/DreamShaper/resolve/d227e39aab5e360aec6401be916025ddfc8127bd/DreamShaper_3.31_baked_vae-inpainting.inpainting.safetensors"
-
-## Checkpoint · dalcefo_painting:
-# download-model --checkpoint "dalcefoPainting_2nd.safetensors" "https://civitai.com/api/download/models/14675?type=Pruned%20Model&format=SafeTensor"
-
-## Checkpoint · Deliberate v2:
-# download-model --checkpoint "deliberate_v2.safetensors" "https://civitai.com/api/download/models/15236?type=Model&format=SafeTensor"
-
-## Checkpoint · RPG v4:
-# download-model --checkpoint "RPG-v4.safetensors" "https://huggingface.co/Anashel/rpg/resolve/main/RPG-V4-Model-Download/RPG-v4.safetensors"
-
-## Checkpoint · A to Zovya RPG Artist's Tools (SD 1.5):
-# download-model --checkpoint "AtoZovyaRPGArtistTools15_sd15V1.safetensors" "https://civitai.com/api/download/models/10185"
-
-## Checkpoint · A to Zovya RPG Artist's Tools (SD 2.1):
-# download-model --checkpoint "AtoZovyaRPGArtistTools15_sd21768V1.safetensors" "https://civitai.com/api/download/models/9593?type=Model&format=SafeTensor"
-# download-model --checkpoint "aToZovyaRPGArtistsTools15_sd21768V1.yaml" "https://civitai.com/api/download/models/9593?type=Config&format=Other"
-
-## ----------------------------
-
-## Add additional models that you want to install on startup. Replace URL and FILENAME from the examples below with your values.
-
-## Usage:
-## download-model --checkpoint "FILENAME" "URL"
-## download-model --lora "FILENAME" "URL"
-## download-model --vae "FILENAME" "URL"
-## download-model --control-net "FILENAME" "URL"
-## download-model --embedding "FILENAME" "URL"
-
-## ----------------------------
-
-## Checkpoint · Example:
-# download-model --checkpoint "FILENAME" "URL"
-
-## LORA (low-rank adaptation) · Example:
-# download-model --lora "FILENAME" "URL"
-
-## VAE (variational autoencoder) · Example:
-# download-model --vae "FILENAME" "URL"
-download-model --checkpoint "anythingv4-5.ckpt" "https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5.ckpt"
diff --git a/spaces/caliex/Comparison-of-Manifold-Learning-methods/app.py b/spaces/caliex/Comparison-of-Manifold-Learning-methods/app.py
deleted file mode 100644
index 777bce03e1cf6016ba0684dc9f4de66c2f729cf7..0000000000000000000000000000000000000000
--- a/spaces/caliex/Comparison-of-Manifold-Learning-methods/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import gradio as gr
-import matplotlib.pyplot as plt
-from matplotlib import ticker
-from sklearn import manifold, datasets
-from mpl_toolkits.mplot3d import Axes3D
-
-
-def compare_manifold_learning(methods, n_samples, n_neighbors, n_components, perplexity):
- S_points, S_color = datasets.make_s_curve(n_samples, random_state=0)
- transformed_data = []
-
-    # Fit each selected method on the S-curve dataset; looping also covers the single-method case.
-    for method in methods:
-        manifold_method = {
-            "Locally Linear Embeddings Standard": manifold.LocallyLinearEmbedding(method="standard", n_neighbors=n_neighbors, n_components=n_components, eigen_solver="auto", random_state=0),
-            "Locally Linear Embeddings LTSA": manifold.LocallyLinearEmbedding(method="ltsa", n_neighbors=n_neighbors, n_components=n_components, eigen_solver="auto", random_state=0),
-            "Locally Linear Embeddings Hessian": manifold.LocallyLinearEmbedding(method="hessian", n_neighbors=n_neighbors, n_components=n_components, eigen_solver="auto", random_state=0),
-            "Locally Linear Embeddings Modified": manifold.LocallyLinearEmbedding(method="modified", n_neighbors=n_neighbors, n_components=n_components, eigen_solver="auto", random_state=0),
-            "Isomap": manifold.Isomap(n_neighbors=n_neighbors, n_components=n_components, p=1),
-            "MultiDimensional Scaling": manifold.MDS(n_components=n_components, max_iter=50, n_init=4, random_state=0, normalized_stress=False),
-            "Spectral Embedding": manifold.SpectralEmbedding(n_components=n_components, n_neighbors=n_neighbors),
-            "T-distributed Stochastic Neighbor Embedding": manifold.TSNE(n_components=n_components, perplexity=perplexity, init="random", n_iter=250, random_state=0)
-        }[method]
-        S_transformed = manifold_method.fit_transform(S_points)
-        transformed_data.append(S_transformed)
-
- fig, axs = plt.subplots(1, len(transformed_data), figsize=(6 * len(transformed_data), 6))
- fig.suptitle("Manifold Learning Comparison", fontsize=16)
-
- if len(methods) == 1:
- ax = axs
- method = methods[0]
- data = transformed_data[0]
- ax.scatter(data[:, 0], data[:, 1], c=S_color, cmap=plt.cm.Spectral)
- ax.set_title(f"Method: {method}")
- ax.axis("tight")
- ax.axis("off")
- ax.xaxis.set_major_locator(ticker.NullLocator())
- ax.yaxis.set_major_locator(ticker.NullLocator())
- else:
- for ax, method, data in zip(axs, methods, transformed_data):
- ax.scatter(data[:, 0], data[:, 1], c=S_color, cmap=plt.cm.Spectral)
- ax.set_title(f"Method: {method}")
- ax.axis("tight")
- ax.axis("off")
- ax.xaxis.set_major_locator(ticker.NullLocator())
- ax.yaxis.set_major_locator(ticker.NullLocator())
-
- plt.tight_layout()
- plt.savefig("plot.png")
- plt.close()
-
- return "plot.png"
-
-method_options = [
- "Locally Linear Embeddings Standard",
- "Locally Linear Embeddings LTSA",
- "Locally Linear Embeddings Hessian",
- "Locally Linear Embeddings Modified",
- "Isomap",
- "MultiDimensional Scaling",
- "Spectral Embedding",
- "T-distributed Stochastic Neighbor Embedding"
-]
-
-inputs = [
- gr.components.CheckboxGroup(method_options, label="Manifold Learning Methods"),
- gr.inputs.Slider(default=1500, label="Number of Samples", maximum=5000),
- gr.inputs.Slider(default=12, label="Number of Neighbors"),
- gr.inputs.Slider(default=2, label="Number of Components"),
- gr.inputs.Slider(default=30, label="Perplexity (for t-SNE)")
-]
-
-gr.Interface(
- fn=compare_manifold_learning,
- inputs=inputs,
- outputs="image",
- examples=[
- [method_options, 1500, 12, 2, 30]
- ],
- title="Manifold Learning Comparison",
- description="This code demonstrates a comparison of manifold learning methods using the S-curve dataset. Manifold learning techniques aim to uncover the underlying structure and relationships within high-dimensional data by projecting it onto a lower-dimensional space. This comparison allows you to explore the effects of different methods on the dataset. See the original scikit-learn example here: https://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html"
-).launch()
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/fpn.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/fpn.py
deleted file mode 100644
index 19d24e13f069ecb389edcdb4d9859506fe9e6f76..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/fpn.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import math
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-
-from .backbone import Backbone
-from .build import BACKBONE_REGISTRY
-from .resnet import build_resnet_backbone
-
-__all__ = ["build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"]
-
-
-class FPN(Backbone):
- """
- This module implements :paper:`FPN`.
- It creates pyramid features built on top of some input feature maps.
- """
-
- _fuse_type: torch.jit.Final[str]
-
- def __init__(
- self,
- bottom_up,
- in_features,
- out_channels,
- norm="",
- top_block=None,
- fuse_type="sum",
- square_pad=0,
- ):
- """
- Args:
- bottom_up (Backbone): module representing the bottom up subnetwork.
- Must be a subclass of :class:`Backbone`. The multi-scale feature
- maps generated by the bottom up network, and listed in `in_features`,
- are used to generate FPN levels.
- in_features (list[str]): names of the input feature maps coming
- from the backbone to which FPN is attached. For example, if the
- backbone produces ["res2", "res3", "res4"], any *contiguous* sublist
- of these may be used; order must be from high to low resolution.
- out_channels (int): number of channels in the output feature maps.
- norm (str): the normalization to use.
- top_block (nn.Module or None): if provided, an extra operation will
- be performed on the output of the last (smallest resolution)
- FPN output, and the result will extend the result list. The top_block
- further downsamples the feature map. It must have an attribute
- "num_levels", meaning the number of extra FPN levels added by
- this block, and "in_feature", which is a string representing
- its input feature (e.g., p5).
- fuse_type (str): types for fusing the top down features and the lateral
- ones. It can be "sum" (default), which sums up element-wise; or "avg",
- which takes the element-wise mean of the two.
- square_pad (int): If > 0, require input images to be padded to specific square size.
- """
- super(FPN, self).__init__()
- assert isinstance(bottom_up, Backbone)
- assert in_features, in_features
-
- # Feature map strides and channels from the bottom up network (e.g. ResNet)
- input_shapes = bottom_up.output_shape()
- strides = [input_shapes[f].stride for f in in_features]
- in_channels_per_feature = [input_shapes[f].channels for f in in_features]
-
- _assert_strides_are_log2_contiguous(strides)
- lateral_convs = []
- output_convs = []
-
- use_bias = norm == ""
- for idx, in_channels in enumerate(in_channels_per_feature):
- lateral_norm = get_norm(norm, out_channels)
- output_norm = get_norm(norm, out_channels)
-
- lateral_conv = Conv2d(
- in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm
- )
- output_conv = Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=use_bias,
- norm=output_norm,
- )
- weight_init.c2_xavier_fill(lateral_conv)
- weight_init.c2_xavier_fill(output_conv)
- stage = int(math.log2(strides[idx]))
- self.add_module("fpn_lateral{}".format(stage), lateral_conv)
- self.add_module("fpn_output{}".format(stage), output_conv)
-
- lateral_convs.append(lateral_conv)
- output_convs.append(output_conv)
- # Place convs into top-down order (from low to high resolution)
- # to make the top-down computation in forward clearer.
- self.lateral_convs = lateral_convs[::-1]
- self.output_convs = output_convs[::-1]
- self.top_block = top_block
- self.in_features = tuple(in_features)
- self.bottom_up = bottom_up
-        # Return feature names are "p<stage>", like ["p2", "p3", ..., "p6"]
- self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides}
- # top block output feature maps.
- if self.top_block is not None:
- for s in range(stage, stage + self.top_block.num_levels):
- self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
-
- self._out_features = list(self._out_feature_strides.keys())
- self._out_feature_channels = {k: out_channels for k in self._out_features}
- self._size_divisibility = strides[-1]
- self._square_pad = square_pad
- assert fuse_type in {"avg", "sum"}
- self._fuse_type = fuse_type
-
- @property
- def size_divisibility(self):
- return self._size_divisibility
-
- @property
- def padding_constraints(self):
- return {"square_size": self._square_pad}
-
- def forward(self, x):
- """
- Args:
- input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to
- feature map tensor for each feature level in high to low resolution order.
-
- Returns:
- dict[str->Tensor]:
- mapping from feature map name to FPN feature map tensor
- in high to low resolution order. Returned feature names follow the FPN
-                paper convention: "p<stage>", where stage has stride = 2 ** stage, e.g.,
- ["p2", "p3", ..., "p6"].
- """
- bottom_up_features = self.bottom_up(x)
- results = []
- prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]])
- results.append(self.output_convs[0](prev_features))
-
- # Reverse feature maps into top-down order (from low to high resolution)
- for idx, (lateral_conv, output_conv) in enumerate(
- zip(self.lateral_convs, self.output_convs)
- ):
- # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336
- # Therefore we loop over all modules but skip the first one
- if idx > 0:
- features = self.in_features[-idx - 1]
- features = bottom_up_features[features]
- top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest")
- lateral_features = lateral_conv(features)
- prev_features = lateral_features + top_down_features
- if self._fuse_type == "avg":
- prev_features /= 2
- results.insert(0, output_conv(prev_features))
-
- if self.top_block is not None:
- if self.top_block.in_feature in bottom_up_features:
- top_block_in_feature = bottom_up_features[self.top_block.in_feature]
- else:
- top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)]
- results.extend(self.top_block(top_block_in_feature))
- assert len(self._out_features) == len(results)
- return {f: res for f, res in zip(self._out_features, results)}
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
-
-def _assert_strides_are_log2_contiguous(strides):
- """
- Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2".
- """
- for i, stride in enumerate(strides[1:], 1):
- assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format(
- stride, strides[i - 1]
- )
-
-
-class LastLevelMaxPool(nn.Module):
- """
- This module is used in the original FPN to generate a downsampled
- P6 feature from P5.
- """
-
- def __init__(self):
- super().__init__()
- self.num_levels = 1
- self.in_feature = "p5"
-
- def forward(self, x):
- return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)]
-
-
-class LastLevelP6P7(nn.Module):
- """
- This module is used in RetinaNet to generate extra layers, P6 and P7 from
- C5 feature.
- """
-
- def __init__(self, in_channels, out_channels, in_feature="res5"):
- super().__init__()
- self.num_levels = 2
- self.in_feature = in_feature
- self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
- self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
- for module in [self.p6, self.p7]:
- weight_init.c2_xavier_fill(module)
-
- def forward(self, c5):
- p6 = self.p6(c5)
- p7 = self.p7(F.relu(p6))
- return [p6, p7]
-
-
-@BACKBONE_REGISTRY.register()
-def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelMaxPool(),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-
-@BACKBONE_REGISTRY.register()
-def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_resnet_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- in_channels_p6p7 = bottom_up.output_shape()["res5"].channels
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7(in_channels_p6p7, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/models.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/models.md
deleted file mode 100644
index a2def5c715ac793e6269cbb84ef4792f91a774c1..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/models.md
+++ /dev/null
@@ -1,180 +0,0 @@
-# Use Models
-
-## Build Models from Yacs Config
-From a yacs config object,
-models (and their sub-models) can be built by
-functions such as `build_model`, `build_backbone`, `build_roi_heads`:
-```python
-from detectron2.modeling import build_model
-model = build_model(cfg) # returns a torch.nn.Module
-```
-
-`build_model` only builds the model structure and fills it with random parameters.
-See below for how to load an existing checkpoint to the model and how to use the `model` object.
-
-### Load/Save a Checkpoint
-```python
-from detectron2.checkpoint import DetectionCheckpointer
-DetectionCheckpointer(model).load(file_path_or_url) # load a file, usually from cfg.MODEL.WEIGHTS
-
-checkpointer = DetectionCheckpointer(model, save_dir="output")
-checkpointer.save("model_999") # save to output/model_999.pth
-```
-
-Detectron2's checkpointer recognizes models in pytorch's `.pth` format, as well as the `.pkl` files
-in our model zoo.
-See [API doc](../modules/checkpoint.html#detectron2.checkpoint.DetectionCheckpointer)
-for more details about its usage.
-
-The model files can be arbitrarily manipulated using `torch.{load,save}` for `.pth` files or
-`pickle.{dump,load}` for `.pkl` files.
-
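-For a quick look inside a `.pth` file, you can load it directly (a minimal sketch; the exact keys depend on how the checkpoint was saved):
-```python
-import torch
-
-# Load the raw checkpoint object on CPU and inspect its top-level structure.
-ckpt = torch.load("output/model_999.pth", map_location="cpu")
-print(type(ckpt))
-if isinstance(ckpt, dict):
-    print(list(ckpt.keys()))  # a checkpointer-saved file typically contains a "model" entry, among others
-```
-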
-### Use a Model
-
-A model can be called by `outputs = model(inputs)`, where `inputs` is a `list[dict]`.
-Each dict corresponds to one image, and the required keys
-depend on the type of model and on whether the model is in training or evaluation mode.
-For example, in order to do inference,
-all existing models expect the "image" key, and optionally "height" and "width".
-The detailed format of inputs and outputs of existing models are explained below.
-
-__Training__: When in training mode, all models are required to be used under an `EventStorage`.
-The training statistics will be put into the storage:
-```python
-from detectron2.utils.events import EventStorage
-with EventStorage() as storage:
- losses = model(inputs)
-```
-
-__Inference__: If you only want to do simple inference using an existing model,
-[DefaultPredictor](../modules/engine.html#detectron2.engine.defaults.DefaultPredictor)
-is a wrapper around model that provides such basic functionality.
-It includes default behavior such as model loading and preprocessing,
-and it operates on a single image rather than on batches. See its documentation for usage.
-
-You can also run inference directly like this:
-```python
-model.eval()
-with torch.no_grad():
- outputs = model(inputs)
-```
-
-### Model Input Format
-
-Users can implement custom models that support any arbitrary input format.
-Here we describe the standard input format that all builtin models support in detectron2.
-They all take a `list[dict]` as the inputs. Each dict
-corresponds to information about one image.
-
-The dict may contain the following keys:
-
-* "image": `Tensor` in (C, H, W) format. The meaning of channels are defined by `cfg.INPUT.FORMAT`.
- Image normalization, if any, will be performed inside the model using
- `cfg.MODEL.PIXEL_{MEAN,STD}`.
-* "height", "width": the **desired** output height and width **in inference**, which is not necessarily the same
- as the height or width of the `image` field.
- For example, the `image` field contains the resized image, if resize is used as a preprocessing step.
- But you may want the outputs to be in **original** resolution.
- If provided, the model will produce output in this resolution,
- rather than in the resolution of the `image` as input into the model. This is more efficient and accurate.
-* "instances": an [Instances](../modules/structures.html#detectron2.structures.Instances)
- object for training, with the following fields:
- + "gt_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each instance.
- + "gt_classes": `Tensor` of long type, a vector of N labels, in range [0, num_categories).
- + "gt_masks": a [PolygonMasks](../modules/structures.html#detectron2.structures.PolygonMasks)
- or [BitMasks](../modules/structures.html#detectron2.structures.BitMasks) object storing N masks, one for each instance.
- + "gt_keypoints": a [Keypoints](../modules/structures.html#detectron2.structures.Keypoints)
- object storing N keypoint sets, one for each instance.
-* "sem_seg": `Tensor[int]` in (H, W) format. The semantic segmentation ground truth for training.
- Values represent category labels starting from 0.
-* "proposals": an [Instances](../modules/structures.html#detectron2.structures.Instances)
- object used only in Fast R-CNN style models, with the following fields:
- + "proposal_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing P proposal boxes.
- + "objectness_logits": `Tensor`, a vector of P scores, one for each proposal.
-
-For inference with builtin models, only the "image" key is required; "height" and "width" are optional.
-
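-For example, a minimal inference input can be built like this (a sketch; the image path is illustrative, `cfg.INPUT.FORMAT` is assumed to be the default "BGR", and in practice you may also want to apply the same resizing used in training, as `DefaultPredictor` does):
-```python
-import cv2
-import torch
-
-# Read an image with OpenCV (BGR, HWC, uint8) and convert it to the (C, H, W) float tensor described above.
-original_image = cv2.imread("input.jpg")
-height, width = original_image.shape[:2]
-image = torch.as_tensor(original_image.astype("float32").transpose(2, 0, 1))
-
-inputs = [{"image": image, "height": height, "width": width}]
-with torch.no_grad():
-    outputs = model(inputs)  # the model must already be built, loaded, and in eval mode (see above)
-```
-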
-We currently don't define a standard input format for panoptic segmentation training,
-because models now use custom formats produced by custom data loaders.
-
-#### How it connects to the data loader:
-
-The output of the default [DatasetMapper]( ../modules/data.html#detectron2.data.DatasetMapper) is a dict
-that follows the above format.
-After the data loader performs batching, it becomes `list[dict]` which the builtin models support.
-
-
-### Model Output Format
-
-When in training mode, the builtin models output a `dict[str->ScalarTensor]` with all the losses.
-
-When in inference mode, the builtin models output a `list[dict]`, one dict for each image.
-Depending on the tasks the model performs, each dict may contain the following fields (a short example of reading them follows the list):
-
-* "instances": [Instances](../modules/structures.html#detectron2.structures.Instances)
- object with the following fields:
- * "pred_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each detected instance.
- * "scores": `Tensor`, a vector of N confidence scores.
- * "pred_classes": `Tensor`, a vector of N labels in range [0, num_categories).
- + "pred_masks": a `Tensor` of shape (N, H, W), masks for each detected instance.
- + "pred_keypoints": a `Tensor` of shape (N, num_keypoint, 3).
- Each row in the last dimension is (x, y, score). Confidence scores are larger than 0.
-* "sem_seg": `Tensor` of (num_categories, H, W), the semantic segmentation prediction.
-* "proposals": [Instances](../modules/structures.html#detectron2.structures.Instances)
- object with the following fields:
- * "proposal_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes)
- object storing N boxes.
- * "objectness_logits": a torch vector of N confidence scores.
-* "panoptic_seg": A tuple of `(pred: Tensor, segments_info: Optional[list[dict]])`.
- The `pred` tensor has shape (H, W), containing the segment id of each pixel.
-
- * If `segments_info` exists, each dict describes one segment id in `pred` and has the following fields:
-
- * "id": the segment id
- * "isthing": whether the segment is a thing or stuff
- * "category_id": the category id of this segment.
-
- If a pixel's id does not exist in `segments_info`, it is considered to be void label
- defined in [Panoptic Segmentation](https://arxiv.org/abs/1801.00868).
-
- * If `segments_info` is None, all pixel values in `pred` must be ≥ -1.
- Pixels with value -1 are assigned void labels.
- Otherwise, the category id of each pixel is obtained by
- `category_id = pixel // metadata.label_divisor`.
-
-
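-As a quick sketch of reading these fields from an instance-detection model (only the fields a given model actually produces will be present):
-```python
-# `outputs` is the list[dict] returned in inference mode, one dict per input image.
-instances = outputs[0]["instances"].to("cpu")
-boxes = instances.pred_boxes.tensor   # (N, 4) tensor of predicted boxes
-scores = instances.scores             # (N,) confidence scores
-classes = instances.pred_classes      # (N,) predicted category labels
-```
-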
-### Partially execute a model:
-
-Sometimes you may want to obtain an intermediate tensor inside a model,
-such as the input of a certain layer or the output before post-processing.
-Since there are typically hundreds of intermediate tensors, there isn't an API that provides you
-the intermediate result you need.
-You have the following options:
-
-1. Write a (sub)model. Following the [tutorial](./write-models.md), you can
- rewrite a model component (e.g. a head of a model), such that it
- does the same thing as the existing component, but returns the output
- you need.
-2. Partially execute a model. You can create the model as usual,
- but use custom code to execute it instead of its `forward()`. For example,
- the following code obtains mask features before mask head.
-
- ```python
- images = ImageList.from_tensors(...) # preprocessed input tensor
- model = build_model(cfg)
- model.eval()
- features = model.backbone(images.tensor)
- proposals, _ = model.proposal_generator(images, features)
- instances, _ = model.roi_heads(images, features, proposals)
- mask_features = [features[f] for f in model.roi_heads.in_features]
- mask_features = model.roi_heads.mask_pooler(mask_features, [x.pred_boxes for x in instances])
- ```
-
-3. Use [forward hooks](https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks).
- Forward hooks can help you obtain inputs or outputs of a certain module.
- If they are not exactly what you want, they can at least be used together with partial execution
-   to obtain other tensors; a small sketch of registering a hook follows this list.
-
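-A minimal sketch of option 3, assuming an eval-mode model that exposes a `backbone` submodule (as the builtin models do); adjust the attribute name for other architectures:
-
-```python
-# Register a forward hook that captures the backbone output, run the model once, then remove the hook.
-captured = {}
-
-def save_backbone_output(module, inputs, output):
-    captured["backbone_features"] = output
-
-handle = model.backbone.register_forward_hook(save_backbone_output)
-with torch.no_grad():
-    model(inputs)
-handle.remove()  # remove hooks you no longer need
-```
-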
-All options require you to read documentation and sometimes code
-of the existing models to understand the internal logic,
-in order to write code to obtain the internal tensors.
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointRend/point_rend/mask_head.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointRend/point_rend/mask_head.py
deleted file mode 100644
index 46dd64721578bd45eb208206bbd5e7908cb6a148..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointRend/point_rend/mask_head.py
+++ /dev/null
@@ -1,435 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-import numpy as np
-from typing import Dict, List, Tuple
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import Tensor, nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, cat, interpolate
-from detectron2.modeling import ROI_MASK_HEAD_REGISTRY
-from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference, mask_rcnn_loss
-from detectron2.structures import Boxes
-
-from .point_features import (
- generate_regular_grid_point_coords,
- get_point_coords_wrt_image,
- get_uncertain_point_coords_on_grid,
- get_uncertain_point_coords_with_randomness,
- point_sample,
- point_sample_fine_grained_features,
- sample_point_labels,
-)
-from .point_head import build_point_head, roi_mask_point_loss
-
-
-def calculate_uncertainty(logits, classes):
- """
-    We estimate uncertainty as the L1 distance between 0.0 and the logit prediction in 'logits' for the
- foreground class in `classes`.
- Args:
- logits (Tensor): A tensor of shape (R, C, ...) or (R, 1, ...) for class-specific or
- class-agnostic, where R is the total number of predicted masks in all images and C is
- the number of foreground classes. The values are logits.
-        classes (list): A list of length R that contains either the predicted or the ground truth class
-            for each predicted mask.
- Returns:
- scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with
- the most uncertain locations having the highest uncertainty score.
- """
- if logits.shape[1] == 1:
- gt_class_logits = logits.clone()
- else:
- gt_class_logits = logits[
- torch.arange(logits.shape[0], device=logits.device), classes
- ].unsqueeze(1)
- return -(torch.abs(gt_class_logits))
-
-
-class ConvFCHead(nn.Module):
- """
- A mask head with fully connected layers. Given pooled features it first reduces channels and
- spatial dimensions with conv layers and then uses FC layers to predict coarse masks analogously
- to the standard box head.
- """
-
- _version = 2
-
- @configurable
- def __init__(
- self, input_shape: ShapeSpec, *, conv_dim: int, fc_dims: List[int], output_shape: Tuple[int]
- ):
- """
- Args:
- conv_dim: the output dimension of the conv layers
- fc_dims: a list of N>0 integers representing the output dimensions of N FC layers
- output_shape: shape of the output mask prediction
- """
- super().__init__()
-
- # fmt: off
- input_channels = input_shape.channels
- input_h = input_shape.height
- input_w = input_shape.width
- self.output_shape = output_shape
- # fmt: on
-
- self.conv_layers = []
- if input_channels > conv_dim:
- self.reduce_channel_dim_conv = Conv2d(
- input_channels,
- conv_dim,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=True,
- activation=F.relu,
- )
- self.conv_layers.append(self.reduce_channel_dim_conv)
-
- self.reduce_spatial_dim_conv = Conv2d(
- conv_dim, conv_dim, kernel_size=2, stride=2, padding=0, bias=True, activation=F.relu
- )
- self.conv_layers.append(self.reduce_spatial_dim_conv)
-
- input_dim = conv_dim * input_h * input_w
- input_dim //= 4
-
- self.fcs = []
- for k, fc_dim in enumerate(fc_dims):
- fc = nn.Linear(input_dim, fc_dim)
- self.add_module("fc{}".format(k + 1), fc)
- self.fcs.append(fc)
- input_dim = fc_dim
-
- output_dim = int(np.prod(self.output_shape))
-
- self.prediction = nn.Linear(fc_dims[-1], output_dim)
- # use normal distribution initialization for mask prediction layer
- nn.init.normal_(self.prediction.weight, std=0.001)
- nn.init.constant_(self.prediction.bias, 0)
-
- for layer in self.conv_layers:
- weight_init.c2_msra_fill(layer)
- for layer in self.fcs:
- weight_init.c2_xavier_fill(layer)
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- output_shape = (
- cfg.MODEL.ROI_HEADS.NUM_CLASSES,
- cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION,
- cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION,
- )
- fc_dim = cfg.MODEL.ROI_MASK_HEAD.FC_DIM
- num_fc = cfg.MODEL.ROI_MASK_HEAD.NUM_FC
- ret = dict(
- input_shape=input_shape,
- conv_dim=cfg.MODEL.ROI_MASK_HEAD.CONV_DIM,
- fc_dims=[fc_dim] * num_fc,
- output_shape=output_shape,
- )
- return ret
-
- def forward(self, x):
- N = x.shape[0]
- for layer in self.conv_layers:
- x = layer(x)
- x = torch.flatten(x, start_dim=1)
- for layer in self.fcs:
- x = F.relu(layer(x))
- output_shape = [N] + list(self.output_shape)
- return self.prediction(x).view(*output_shape)
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- version = local_metadata.get("version", None)
-
- if version is None or version < 2:
- logger = logging.getLogger(__name__)
- logger.warning(
- "Weight format of PointRend models have changed! "
- "Applying automatic conversion now ..."
- )
- for k in list(state_dict.keys()):
- newk = k
- if k.startswith(prefix + "coarse_mask_fc"):
- newk = k.replace(prefix + "coarse_mask_fc", prefix + "fc")
- if newk != k:
- state_dict[newk] = state_dict[k]
- del state_dict[k]
-
-
-@ROI_MASK_HEAD_REGISTRY.register()
-class PointRendMaskHead(nn.Module):
- def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]):
- super().__init__()
- self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()}
- # point head
- self._init_point_head(cfg, input_shape)
- # coarse mask head
- self.roi_pooler_in_features = cfg.MODEL.ROI_MASK_HEAD.IN_FEATURES
- self.roi_pooler_size = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION
- self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()}
- in_channels = np.sum([input_shape[f].channels for f in self.roi_pooler_in_features])
- self._init_roi_head(
- cfg,
- ShapeSpec(
- channels=in_channels,
- width=self.roi_pooler_size,
- height=self.roi_pooler_size,
- ),
- )
-
- def _init_roi_head(self, cfg, input_shape):
- self.coarse_head = ConvFCHead(cfg, input_shape)
-
- def _init_point_head(self, cfg, input_shape):
- # fmt: off
- self.mask_point_on = cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON
- if not self.mask_point_on:
- return
- assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES
- self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES
- self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS
- self.mask_point_oversample_ratio = cfg.MODEL.POINT_HEAD.OVERSAMPLE_RATIO
- self.mask_point_importance_sample_ratio = cfg.MODEL.POINT_HEAD.IMPORTANCE_SAMPLE_RATIO
-        # the next three parameters are used in the adaptive subdivision inference procedure
- self.mask_point_subdivision_init_resolution = cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION
- self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS
- self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS
- # fmt: on
-
- in_channels = int(np.sum([input_shape[f].channels for f in self.mask_point_in_features]))
- self.point_head = build_point_head(cfg, ShapeSpec(channels=in_channels, width=1, height=1))
-
- # An optimization to skip unused subdivision steps: if after subdivision, all pixels on
- # the mask will be selected and recomputed anyway, we should just double our init_resolution
- while (
- 4 * self.mask_point_subdivision_init_resolution**2
- <= self.mask_point_subdivision_num_points
- ):
- self.mask_point_subdivision_init_resolution *= 2
- self.mask_point_subdivision_steps -= 1
-
- def forward(self, features, instances):
- """
- Args:
- features (dict[str, Tensor]): a dict of image-level features
- instances (list[Instances]): proposals in training; detected
- instances in inference
- """
- if self.training:
- proposal_boxes = [x.proposal_boxes for x in instances]
- coarse_mask = self.coarse_head(self._roi_pooler(features, proposal_boxes))
- losses = {"loss_mask": mask_rcnn_loss(coarse_mask, instances)}
- if not self.mask_point_on:
- return losses
-
- point_coords, point_labels = self._sample_train_points(coarse_mask, instances)
- point_fine_grained_features = self._point_pooler(features, proposal_boxes, point_coords)
- point_logits = self._get_point_logits(
- point_fine_grained_features, point_coords, coarse_mask
- )
- losses["loss_mask_point"] = roi_mask_point_loss(point_logits, instances, point_labels)
- return losses
- else:
- pred_boxes = [x.pred_boxes for x in instances]
- coarse_mask = self.coarse_head(self._roi_pooler(features, pred_boxes))
- return self._subdivision_inference(features, coarse_mask, instances)
-
- def _roi_pooler(self, features: List[Tensor], boxes: List[Boxes]):
- """
- Extract per-box feature. This is similar to RoIAlign(sampling_ratio=1) except:
- 1. It's implemented by point_sample
-        2. It pools features across all levels and concatenates them, while RoIAlign typically
-           selects one level for every box. However, in the config we only use
- one level (p2) so there is no difference.
-
- Returns:
- Tensor of shape (R, C, pooler_size, pooler_size) where R is the total number of boxes
- """
- features_list = [features[k] for k in self.roi_pooler_in_features]
- features_scales = [self._feature_scales[k] for k in self.roi_pooler_in_features]
-
- num_boxes = sum(x.tensor.size(0) for x in boxes)
- output_size = self.roi_pooler_size
- point_coords = generate_regular_grid_point_coords(num_boxes, output_size, boxes[0].device)
- # For regular grids of points, this function is equivalent to `len(features_list)' calls
- # of `ROIAlign` (with `SAMPLING_RATIO=1`), and concat the results.
- roi_features, _ = point_sample_fine_grained_features(
- features_list, features_scales, boxes, point_coords
- )
- return roi_features.view(num_boxes, roi_features.shape[1], output_size, output_size)
-
- def _sample_train_points(self, coarse_mask, instances):
- assert self.training
- gt_classes = cat([x.gt_classes for x in instances])
- with torch.no_grad():
- # sample point_coords
- point_coords = get_uncertain_point_coords_with_randomness(
- coarse_mask,
- lambda logits: calculate_uncertainty(logits, gt_classes),
- self.mask_point_train_num_points,
- self.mask_point_oversample_ratio,
- self.mask_point_importance_sample_ratio,
- )
- # sample point_labels
- proposal_boxes = [x.proposal_boxes for x in instances]
- cat_boxes = Boxes.cat(proposal_boxes)
- point_coords_wrt_image = get_point_coords_wrt_image(cat_boxes.tensor, point_coords)
- point_labels = sample_point_labels(instances, point_coords_wrt_image)
- return point_coords, point_labels
-
- def _point_pooler(self, features, proposal_boxes, point_coords):
- point_features_list = [features[k] for k in self.mask_point_in_features]
- point_features_scales = [self._feature_scales[k] for k in self.mask_point_in_features]
- # sample image-level features
- point_fine_grained_features, _ = point_sample_fine_grained_features(
- point_features_list, point_features_scales, proposal_boxes, point_coords
- )
- return point_fine_grained_features
-
- def _get_point_logits(self, point_fine_grained_features, point_coords, coarse_mask):
- coarse_features = point_sample(coarse_mask, point_coords, align_corners=False)
- point_logits = self.point_head(point_fine_grained_features, coarse_features)
- return point_logits
-
- def _subdivision_inference(self, features, mask_representations, instances):
- assert not self.training
-
- pred_boxes = [x.pred_boxes for x in instances]
- pred_classes = cat([x.pred_classes for x in instances])
-
- mask_logits = None
- # +1 here to include an initial step that generates the coarsest mask
- # prediction with init_resolution, when mask_logits is None.
- # We compute the initial mask by sampling on a regular grid. coarse_mask
- # could be used as the initial mask as well, but it's typically very low-res,
- # so it would be completely overwritten during subdivision anyway.
- for _ in range(self.mask_point_subdivision_steps + 1):
- if mask_logits is None:
- point_coords = generate_regular_grid_point_coords(
- pred_classes.size(0),
- self.mask_point_subdivision_init_resolution,
- pred_boxes[0].device,
- )
- else:
- mask_logits = interpolate(
- mask_logits, scale_factor=2, mode="bilinear", align_corners=False
- )
- uncertainty_map = calculate_uncertainty(mask_logits, pred_classes)
- point_indices, point_coords = get_uncertain_point_coords_on_grid(
- uncertainty_map, self.mask_point_subdivision_num_points
- )
-
- # Run the point head for every point in point_coords
- fine_grained_features = self._point_pooler(features, pred_boxes, point_coords)
- point_logits = self._get_point_logits(
- fine_grained_features, point_coords, mask_representations
- )
-
- if mask_logits is None:
- # Create initial mask_logits using point_logits on this regular grid
- R, C, _ = point_logits.shape
- mask_logits = point_logits.reshape(
- R,
- C,
- self.mask_point_subdivision_init_resolution,
- self.mask_point_subdivision_init_resolution,
- )
- # The subdivision code will fail with an empty list of boxes
- if len(pred_classes) == 0:
- mask_rcnn_inference(mask_logits, instances)
- return instances
- else:
- # Put point predictions to the right places on the upsampled grid.
- R, C, H, W = mask_logits.shape
- point_indices = point_indices.unsqueeze(1).expand(-1, C, -1)
- mask_logits = (
- mask_logits.reshape(R, C, H * W)
- .scatter_(2, point_indices, point_logits)
- .view(R, C, H, W)
- )
- mask_rcnn_inference(mask_logits, instances)
- return instances
-
-
-@ROI_MASK_HEAD_REGISTRY.register()
-class ImplicitPointRendMaskHead(PointRendMaskHead):
- def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]):
- super().__init__(cfg, input_shape)
-
- def _init_roi_head(self, cfg, input_shape):
- assert hasattr(self, "num_params"), "Please initialize point_head first!"
- self.parameter_head = ConvFCHead(cfg, input_shape, output_shape=(self.num_params,))
- self.regularizer = cfg.MODEL.IMPLICIT_POINTREND.PARAMS_L2_REGULARIZER
-
- def _init_point_head(self, cfg, input_shape):
- # fmt: off
- self.mask_point_on = True # always on
- assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES
- self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES
- self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS
- # the next two parameters are used in the adaptive subdivision inference procedure
- self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS
- self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS
- # fmt: on
-
- in_channels = int(np.sum([input_shape[f].channels for f in self.mask_point_in_features]))
- self.point_head = build_point_head(cfg, ShapeSpec(channels=in_channels, width=1, height=1))
- self.num_params = self.point_head.num_params
-
- # inference parameters
- self.mask_point_subdivision_init_resolution = int(
- math.sqrt(self.mask_point_subdivision_num_points)
- )
- assert (
- self.mask_point_subdivision_init_resolution
- * self.mask_point_subdivision_init_resolution
- == self.mask_point_subdivision_num_points
- )
-
- def forward(self, features, instances):
- """
- Args:
- features (dict[str, Tensor]): a dict of image-level features
- instances (list[Instances]): proposals in training; detected
- instances in inference
- """
- if self.training:
- proposal_boxes = [x.proposal_boxes for x in instances]
- parameters = self.parameter_head(self._roi_pooler(features, proposal_boxes))
- losses = {"loss_l2": self.regularizer * (parameters**2).mean()}
-
- point_coords, point_labels = self._uniform_sample_train_points(instances)
- point_fine_grained_features = self._point_pooler(features, proposal_boxes, point_coords)
- point_logits = self._get_point_logits(
- point_fine_grained_features, point_coords, parameters
- )
- losses["loss_mask_point"] = roi_mask_point_loss(point_logits, instances, point_labels)
- return losses
- else:
- pred_boxes = [x.pred_boxes for x in instances]
- parameters = self.parameter_head(self._roi_pooler(features, pred_boxes))
- return self._subdivision_inference(features, parameters, instances)
-
- def _uniform_sample_train_points(self, instances):
- assert self.training
- proposal_boxes = [x.proposal_boxes for x in instances]
- cat_boxes = Boxes.cat(proposal_boxes)
- # uniform sample
- point_coords = torch.rand(
- len(cat_boxes), self.mask_point_train_num_points, 2, device=cat_boxes.tensor.device
- )
- # sample point_labels
- point_coords_wrt_image = get_point_coords_wrt_image(cat_boxes.tensor, point_coords)
- point_labels = sample_point_labels(instances, point_coords_wrt_image)
- return point_coords, point_labels
-
- def _get_point_logits(self, fine_grained_features, point_coords, parameters):
- return self.point_head(fine_grained_features, point_coords, parameters)
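
A short aside on the deleted `mask_head.py` above: its `_subdivision_inference` loop upsamples the coarse mask logits, selects the most uncertain points, and re-predicts only those points with the point head. The sketch below restates that idea in plain PyTorch; `point_fn` is a made-up stand-in for the real point head and pooler, so this is an illustration of the subdivision pattern, not the Detectron2 implementation.

```python
import torch
import torch.nn.functional as F


def subdivision_refine(coarse_logits, point_fn, steps=3, num_points=64):
    """coarse_logits: (R, 1, H, W) mask logits. point_fn maps normalized
    point coords of shape (R, P, 2) to refined logits of shape (R, 1, P)."""
    logits = coarse_logits
    for _ in range(steps):
        # 1. Upsample the current prediction by 2x.
        logits = F.interpolate(logits, scale_factor=2, mode="bilinear", align_corners=False)
        R, C, H, W = logits.shape
        # 2. Uncertainty: logits closest to the decision boundary (0) are most uncertain.
        uncertainty = -logits.abs().view(R, C, H * W)
        k = min(num_points, H * W)
        _, idx = uncertainty.topk(k, dim=2)                      # (R, C, k) flat indices
        # 3. Flat indices -> normalized (x, y) coordinates of the cell centers.
        ys = (torch.div(idx, W, rounding_mode="floor").float() + 0.5) / H
        xs = (idx.remainder(W).float() + 0.5) / W
        coords = torch.stack([xs, ys], dim=-1)[:, 0]             # (R, k, 2)
        # 4. Re-predict only these points and scatter them back into the grid.
        refined = point_fn(coords)                               # (R, 1, k)
        logits = logits.view(R, C, H * W).scatter(2, idx, refined).view(R, C, H, W)
    return logits


# Dummy point head that always predicts "foreground", just to exercise the loop.
out = subdivision_refine(
    torch.randn(2, 1, 7, 7),
    lambda pts: torch.ones(pts.shape[0], 1, pts.shape[1]),
)
print(out.shape)  # torch.Size([2, 1, 56, 56])
```
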
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/README.md
deleted file mode 100644
index 4b7a90102d008a498e93dff595a09206be5269e7..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/README.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-# TridentNet in Detectron2
-**Scale-Aware Trident Networks for Object Detection**
-
-Yanghao Li\*, Yuntao Chen\*, Naiyan Wang, Zhaoxiang Zhang
-
-[[`TridentNet`](https://github.com/TuSimple/simpledet/tree/master/models/tridentnet)] [[`arXiv`](https://arxiv.org/abs/1901.01892)] [[`BibTeX`](#CitingTridentNet)]
-
-
-
-
-
-In this repository, we implement TridentNet-Fast in Detectron2.
-Trident Network (TridentNet) aims to generate scale-specific feature maps with a uniform representational power. We construct a parallel multi-branch architecture in which each branch shares the same transformation parameters but has a different receptive field. TridentNet-Fast is a fast approximation of TridentNet that achieves significant improvements without any additional parameters or computational cost.
-
-## Training
-
-To train a model, run
-```bash
-python /path/to/detectron2/projects/TridentNet/train_net.py --config-file <config.yaml>
-```
-
-For example, to launch end-to-end TridentNet training with ResNet-50 backbone on 8 GPUs,
-one should execute:
-```bash
-python /path/to/detectron2/projects/TridentNet/train_net.py --config-file configs/tridentnet_fast_R_50_C4_1x.yaml --num-gpus 8
-```
-
-## Evaluation
-
-Model evaluation can be done similarly:
-```bash
-python /path/to/detectron2/projects/TridentNet/train_net.py --config-file configs/tridentnet_fast_R_50_C4_1x.yaml --eval-only MODEL.WEIGHTS model.pth
-```
-
-## Results on MS-COCO in Detectron2
-
-|Model|Backbone|Head|lr sched|AP|AP50|AP75|APs|APm|APl|download|
-|-----|--------|----|--------|--|----|----|---|---|---|--------|
-|Faster|R50-C4|C5-512ROI|1X|35.7|56.1|38.0|19.2|40.9|48.7|model \| metrics|
-|TridentFast|R50-C4|C5-128ROI|1X|38.0|58.1|40.8|19.5|42.2|54.6|model \| metrics|
-|Faster|R50-C4|C5-512ROI|3X|38.4|58.7|41.3|20.7|42.7|53.1|model \| metrics|
-|TridentFast|R50-C4|C5-128ROI|3X|40.6|60.8|43.6|23.4|44.7|57.1|model \| metrics|
-|Faster|R101-C4|C5-512ROI|3X|41.1|61.4|44.0|22.2|45.5|55.9|model \| metrics|
-|TridentFast|R101-C4|C5-128ROI|3X|43.6|63.4|47.0|24.3|47.8|60.0|model \| metrics|
-
-
-## Citing TridentNet
-
-If you use TridentNet, please use the following BibTeX entry.
-
-```
-@InProceedings{li2019scale,
- title={Scale-Aware Trident Networks for Object Detection},
- author={Li, Yanghao and Chen, Yuntao and Wang, Naiyan and Zhang, Zhaoxiang},
- booktitle={The International Conference on Computer Vision (ICCV)},
- year={2019}
-}
-```
-
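
The README above describes TridentNet's core trick: parallel branches that share the same convolution weights but use different dilations, so each branch sees a different receptive field at no extra parameter cost. Below is a minimal, hypothetical sketch of that weight-sharing pattern (not the SimpleDet or Detectron2 code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TridentBranchConv(nn.Module):
    """One 3x3 conv whose weights are reused across branches with dilations 1, 2, 3,
    giving each branch a different receptive field without adding parameters."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 3)):
        super().__init__()
        self.dilations = dilations
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        nn.init.kaiming_normal_(self.weight)

    def forward(self, x):
        # Same weight tensor, different dilation (and matching padding) per branch.
        return [
            F.conv2d(x, self.weight, self.bias, padding=d, dilation=d)
            for d in self.dilations
        ]


branches = TridentBranchConv(3, 8)(torch.randn(1, 3, 32, 32))
print([b.shape for b in branches])  # three tensors of shape (1, 8, 32, 32)
```
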
diff --git a/spaces/cass1337/sdcharactercreator/README.md b/spaces/cass1337/sdcharactercreator/README.md
deleted file mode 100644
index ff1c9d3d23bb65d25572b991408d907c710ba166..0000000000000000000000000000000000000000
--- a/spaces/cass1337/sdcharactercreator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sdcharactercreator
-emoji: 📚
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/mmbench.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/mmbench.py
deleted file mode 100644
index 0a6cdba9ce2b79d20ab22d00034ecd3b03ac78f5..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/task/mmbench.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import base64
-import io
-import random
-
-import pandas as pd
-from PIL import Image
-from torch.utils.data import Dataset
-from open_flamingo.eval.task.utils import get_object_from_text
-
-def decode_base64_to_image(base64_string):
- image_data = base64.b64decode(base64_string)
- image = Image.open(io.BytesIO(image_data))
- return image
-
-class MMBenchDataset(Dataset):
- def __init__(self,
- data_file,
- sys_prompt='There are several options:'):
- self.df = pd.read_csv(data_file, sep='\t')
- self.sys_prompt = sys_prompt
-
- def __len__(self):
- return len(self.df)
-
- def __getitem__(self, idx):
- index = self.df.iloc[idx]['index']
- image = self.df.iloc[idx]['image']
- image = decode_base64_to_image(image)
- question = self.df.iloc[idx]['question']
- answer = self.df.iloc[idx]['answer'] if 'answer' in self.df.iloc[0].keys() else None
- category = self.df.iloc[idx]['category']
- l2_category = self.df.iloc[idx]['l2-category']
-
- option_candidate = ['A', 'B', 'C', 'D', 'E']
- options = {
- cand: self.load_from_df(idx, cand)
- for cand in option_candidate
- if self.load_from_df(idx, cand) is not None
- }
- options_prompt = f'{self.sys_prompt}\n'
- for key, item in options.items():
- options_prompt += f'{key}. {item}\n'
-
- hint = self.load_from_df(idx, 'hint')
- data = {
- 'img': image,
- 'question': question,
- 'answer': answer,
- 'options': options_prompt,
- 'category': category,
- 'l2-category': l2_category,
- 'options_dict': options,
- 'index': index,
- 'context': hint,
- }
- return data
- def load_from_df(self, idx, key):
- if key in self.df.iloc[idx] and not pd.isna(self.df.iloc[idx][key]):
- return self.df.iloc[idx][key]
- else:
- return None
-
-
-def evaluate_mmbench(
- model,
- tokenizer,
- image_processor,
- batch_size=1,
- image_dir_path=None,
- questions_json_path=None,
- annotations_json_path=None,
- vis_embed_size=None,
- rank=0,
- world_size=1,
- id=0,
-):
- dataset_name = "mmbench"
- dataset = MMBenchDataset("/gpfs/u/home/LMCG/LMCGljnn/scratch/datasets/raw/mmbench/mmbench_dev_20230712.tsv")
- for sample in dataset:
- print(sample)
-
-
-if __name__ == '__main__':
- evaluate_mmbench(None, None, None)
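
For reference, a hedged usage sketch of the `MMBenchDataset` defined in the file above: it builds a one-row TSV with a base64-encoded image in the column layout the class expects and reads it back. The file name, row values, and option columns are made up for illustration, and the snippet assumes `MMBenchDataset` is available in scope.

```python
import base64
import io

import pandas as pd
from PIL import Image

# Encode a tiny solid-colour image the same way the dataset decodes it.
buf = io.BytesIO()
Image.new("RGB", (8, 8), "red").save(buf, format="PNG")
img_b64 = base64.b64encode(buf.getvalue()).decode()

row = {
    "index": 0, "image": img_b64, "question": "What colour is the square?",
    "answer": "A", "category": "color", "l2-category": "perception",
    "A": "red", "B": "blue", "hint": None,
}
pd.DataFrame([row]).to_csv("mmbench_sample.tsv", sep="\t", index=False)

# Assumes the MMBenchDataset class from the deleted file above is defined/importable.
ds = MMBenchDataset("mmbench_sample.tsv")
sample = ds[0]
print(sample["options"])  # "There are several options:\nA. red\nB. blue\n"
```
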
diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/test_accelerate_examples.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/test_accelerate_examples.py
deleted file mode 100644
index d88a2ead64b4ae33600450243166c5bcde6f5914..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/pytorch/test_accelerate_examples.py
+++ /dev/null
@@ -1,334 +0,0 @@
-# coding=utf-8
-# Copyright 2018 HuggingFace Inc..
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import argparse
-import json
-import logging
-import os
-import shutil
-import sys
-import tempfile
-from unittest import mock
-
-import torch
-from accelerate.utils import write_basic_config
-
-from transformers.testing_utils import TestCasePlus, get_gpu_count, run_command, slow, torch_device
-from transformers.utils import is_apex_available
-
-
-logging.basicConfig(level=logging.DEBUG)
-
-logger = logging.getLogger()
-
-
-def get_setup_file():
- parser = argparse.ArgumentParser()
- parser.add_argument("-f")
- args = parser.parse_args()
- return args.f
-
-
-def get_results(output_dir):
- results = {}
- path = os.path.join(output_dir, "all_results.json")
- if os.path.exists(path):
- with open(path, "r") as f:
- results = json.load(f)
- else:
- raise ValueError(f"can't find {path}")
- return results
-
-
-def is_cuda_and_apex_available():
- is_using_cuda = torch.cuda.is_available() and torch_device == "cuda"
- return is_using_cuda and is_apex_available()
-
-
-stream_handler = logging.StreamHandler(sys.stdout)
-logger.addHandler(stream_handler)
-
-
-class ExamplesTestsNoTrainer(TestCasePlus):
- @classmethod
- def setUpClass(cls):
- # Write Accelerate config, will pick up on CPU, GPU, and multi-GPU
- cls.tmpdir = tempfile.mkdtemp()
- cls.configPath = os.path.join(cls.tmpdir, "default_config.yml")
- write_basic_config(save_location=cls.configPath)
- cls._launch_args = ["accelerate", "launch", "--config_file", cls.configPath]
-
- @classmethod
- def tearDownClass(cls):
- shutil.rmtree(cls.tmpdir)
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_glue_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/text-classification/run_glue_no_trainer.py
- --model_name_or_path distilbert-base-uncased
- --output_dir {tmp_dir}
- --train_file ./tests/fixtures/tests_samples/MRPC/train.csv
- --validation_file ./tests/fixtures/tests_samples/MRPC/dev.csv
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --learning_rate=1e-4
- --seed=42
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- if is_cuda_and_apex_available():
- testargs.append("--fp16")
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_accuracy"], 0.75)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "glue_no_trainer")))
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_clm_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/language-modeling/run_clm_no_trainer.py
- --model_name_or_path distilgpt2
- --train_file ./tests/fixtures/sample_text.txt
- --validation_file ./tests/fixtures/sample_text.txt
- --block_size 128
- --per_device_train_batch_size 5
- --per_device_eval_batch_size 5
- --num_train_epochs 2
- --output_dir {tmp_dir}
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- if torch.cuda.device_count() > 1:
- # Skipping because there are not enough batches to train the model + would need a drop_last to work.
- return
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertLess(result["perplexity"], 100)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "clm_no_trainer")))
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_mlm_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/language-modeling/run_mlm_no_trainer.py
- --model_name_or_path distilroberta-base
- --train_file ./tests/fixtures/sample_text.txt
- --validation_file ./tests/fixtures/sample_text.txt
- --output_dir {tmp_dir}
- --num_train_epochs=1
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertLess(result["perplexity"], 42)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "mlm_no_trainer")))
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_ner_no_trainer(self):
- # with so little data, distributed training needs more epochs to get the score on par with 0/1 GPU
- epochs = 7 if get_gpu_count() > 1 else 2
-
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/token-classification/run_ner_no_trainer.py
- --model_name_or_path bert-base-uncased
- --train_file tests/fixtures/tests_samples/conll/sample.json
- --validation_file tests/fixtures/tests_samples/conll/sample.json
- --output_dir {tmp_dir}
- --learning_rate=2e-4
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=2
- --num_train_epochs={epochs}
- --seed 7
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_accuracy"], 0.75)
- self.assertLess(result["train_loss"], 0.5)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "ner_no_trainer")))
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_squad_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/question-answering/run_qa_no_trainer.py
- --model_name_or_path bert-base-uncased
- --version_2_with_negative
- --train_file tests/fixtures/tests_samples/SQUAD/sample.json
- --validation_file tests/fixtures/tests_samples/SQUAD/sample.json
- --output_dir {tmp_dir}
- --seed=42
- --max_train_steps=10
- --num_warmup_steps=2
- --learning_rate=2e-4
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- # Because we use --version_2_with_negative the testing script uses SQuAD v2 metrics.
- self.assertGreaterEqual(result["eval_f1"], 28)
- self.assertGreaterEqual(result["eval_exact"], 28)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "qa_no_trainer")))
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_swag_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/multiple-choice/run_swag_no_trainer.py
- --model_name_or_path bert-base-uncased
- --train_file tests/fixtures/tests_samples/swag/sample.json
- --validation_file tests/fixtures/tests_samples/swag/sample.json
- --output_dir {tmp_dir}
- --max_train_steps=20
- --num_warmup_steps=2
- --learning_rate=2e-4
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_accuracy"], 0.8)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "swag_no_trainer")))
-
- @slow
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_summarization_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/summarization/run_summarization_no_trainer.py
- --model_name_or_path t5-small
- --train_file tests/fixtures/tests_samples/xsum/sample.json
- --validation_file tests/fixtures/tests_samples/xsum/sample.json
- --output_dir {tmp_dir}
- --max_train_steps=50
- --num_warmup_steps=8
- --learning_rate=2e-4
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_rouge1"], 10)
- self.assertGreaterEqual(result["eval_rouge2"], 2)
- self.assertGreaterEqual(result["eval_rougeL"], 7)
- self.assertGreaterEqual(result["eval_rougeLsum"], 7)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "summarization_no_trainer")))
-
- @slow
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_translation_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/translation/run_translation_no_trainer.py
- --model_name_or_path sshleifer/student_marian_en_ro_6_1
- --source_lang en
- --target_lang ro
- --train_file tests/fixtures/tests_samples/wmt16/sample.json
- --validation_file tests/fixtures/tests_samples/wmt16/sample.json
- --output_dir {tmp_dir}
- --max_train_steps=50
- --num_warmup_steps=8
- --learning_rate=3e-3
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --source_lang en_XX
- --target_lang ro_RO
- --checkpointing_steps epoch
- --with_tracking
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_bleu"], 30)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "epoch_0")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "translation_no_trainer")))
-
- @slow
- def test_run_semantic_segmentation_no_trainer(self):
- stream_handler = logging.StreamHandler(sys.stdout)
- logger.addHandler(stream_handler)
-
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py
- --dataset_name huggingface/semantic-segmentation-test-sample
- --output_dir {tmp_dir}
- --max_train_steps=10
- --num_warmup_steps=2
- --learning_rate=2e-4
- --per_device_train_batch_size=2
- --per_device_eval_batch_size=1
- --checkpointing_steps epoch
- """.split()
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- self.assertGreaterEqual(result["eval_overall_accuracy"], 0.10)
-
- @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"})
- def test_run_image_classification_no_trainer(self):
- tmp_dir = self.get_auto_remove_tmp_dir()
- testargs = f"""
- {self.examples_dir}/pytorch/image-classification/run_image_classification_no_trainer.py
- --model_name_or_path google/vit-base-patch16-224-in21k
- --dataset_name hf-internal-testing/cats_vs_dogs_sample
- --learning_rate 1e-4
- --per_device_train_batch_size 2
- --per_device_eval_batch_size 1
- --max_train_steps 2
- --train_val_split 0.1
- --seed 42
- --output_dir {tmp_dir}
- --with_tracking
- --checkpointing_steps 1
- """.split()
-
- if is_cuda_and_apex_available():
- testargs.append("--fp16")
-
- run_command(self._launch_args + testargs)
- result = get_results(tmp_dir)
- # The base model scores 25%
- self.assertGreaterEqual(result["eval_accuracy"], 0.6)
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "step_1")))
- self.assertTrue(os.path.exists(os.path.join(tmp_dir, "image_classification_no_trainer")))
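
As a side note, the tests above all follow the same pattern: write a basic Accelerate config once, prepend `accelerate launch --config_file ...` to an example script's arguments, then read `all_results.json` from the output directory and assert on the metrics. A stripped-down, hypothetical version of that pattern outside the test harness is sketched below; the script path, data files, and threshold are illustrative only.

```python
import json
import os
import subprocess
import tempfile

from accelerate.utils import write_basic_config

with tempfile.TemporaryDirectory() as tmpdir:
    # Write a default Accelerate config; it picks up CPU/GPU automatically.
    config_path = os.path.join(tmpdir, "default_config.yml")
    write_basic_config(save_location=config_path)

    # Launch an example script through `accelerate launch` (paths are placeholders).
    cmd = [
        "accelerate", "launch", "--config_file", config_path,
        "examples/pytorch/text-classification/run_glue_no_trainer.py",
        "--model_name_or_path", "distilbert-base-uncased",
        "--train_file", "train.csv", "--validation_file", "dev.csv",
        "--output_dir", tmpdir, "--num_train_epochs", "1",
    ]
    subprocess.run(cmd, check=True)

    # The example scripts dump their final metrics to all_results.json.
    with open(os.path.join(tmpdir, "all_results.json")) as f:
        results = json.load(f)
    assert results["eval_accuracy"] >= 0.5  # illustrative threshold
```
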
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/big_bird/evaluate.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/big_bird/evaluate.py
deleted file mode 100644
index 04e9e01ca237bda5ac87e0e8b603dc1b1b9a0ac9..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/big_bird/evaluate.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import jax
-import jax.numpy as jnp
-from bigbird_flax import FlaxBigBirdForNaturalQuestions
-from datasets import load_from_disk
-
-from transformers import BigBirdTokenizerFast
-
-
-CATEGORY_MAPPING = {0: "null", 1: "short", 2: "long", 3: "yes", 4: "no"}
-PUNCTUATION_SET_TO_EXCLUDE = set("".join(["‘", "’", "´", "`", ".", ",", "-", '"']))
-
-
-def get_sub_answers(answers, begin=0, end=None):
- return [" ".join(x.split(" ")[begin:end]) for x in answers if len(x.split(" ")) > 1]
-
-
-def expand_to_aliases(given_answers, make_sub_answers=False):
- if make_sub_answers:
- # if answers are longer than one word, make sure a prediction is correct if it corresponds to the complete 1: or :-1 sub-word
- # *e.g.* if the correct answer contains a prefix such as "the", or "a"
- given_answers = (
- given_answers + get_sub_answers(given_answers, begin=1) + get_sub_answers(given_answers, end=-1)
- )
- answers = []
- for answer in given_answers:
- alias = answer.replace("_", " ").lower()
- alias = "".join(c if c not in PUNCTUATION_SET_TO_EXCLUDE else " " for c in alias)
- answers.append(" ".join(alias.split()).strip())
- return set(answers)
-
-
-def get_best_valid_start_end_idx(start_scores, end_scores, top_k=1, max_size=100):
- best_start_scores, best_start_idx = jax.lax.top_k(start_scores, top_k)
- best_end_scores, best_end_idx = jax.lax.top_k(end_scores, top_k)
-
- widths = best_end_idx[:, None] - best_start_idx[None, :]
- mask = jnp.logical_or(widths < 0, widths > max_size)
- scores = (best_end_scores[:, None] + best_start_scores[None, :]) - (1e8 * mask)
- best_score = jnp.argmax(scores).item()
-
- return best_start_idx[best_score % top_k], best_end_idx[best_score // top_k]
-
-
-def format_dataset(sample):
- question = sample["question"]["text"]
- context = sample["document"]["tokens"]["token"]
- is_html = sample["document"]["tokens"]["is_html"]
- long_answers = sample["annotations"]["long_answer"]
- short_answers = sample["annotations"]["short_answers"]
-
- context_string = " ".join([context[i] for i in range(len(context)) if not is_html[i]])
-
- # 0 - No ; 1 - Yes
- for answer in sample["annotations"]["yes_no_answer"]:
- if answer == 0 or answer == 1:
- return {
- "question": question,
- "context": context_string,
- "short": [],
- "long": [],
- "category": "no" if answer == 0 else "yes",
- }
-
- short_targets = []
- for s in short_answers:
- short_targets.extend(s["text"])
- short_targets = list(set(short_targets))
-
- long_targets = []
- for s in long_answers:
- if s["start_token"] == -1:
- continue
- answer = context[s["start_token"] : s["end_token"]]
- html = is_html[s["start_token"] : s["end_token"]]
- new_answer = " ".join([answer[i] for i in range(len(answer)) if not html[i]])
- if new_answer not in long_targets:
- long_targets.append(new_answer)
-
- category = "long_short" if len(short_targets + long_targets) > 0 else "null"
-
- return {
- "question": question,
- "context": context_string,
- "short": short_targets,
- "long": long_targets,
- "category": category,
- }
-
-
-def main():
- dataset = load_from_disk("natural-questions-validation")
- dataset = dataset.map(format_dataset).remove_columns(["annotations", "document", "id"])
- print(dataset)
-
- short_validation_dataset = dataset.filter(lambda x: (len(x["question"]) + len(x["context"])) < 4 * 4096)
- short_validation_dataset = short_validation_dataset.filter(lambda x: x["category"] != "null")
- short_validation_dataset
-
- model_id = "vasudevgupta/flax-bigbird-natural-questions"
- model = FlaxBigBirdForNaturalQuestions.from_pretrained(model_id)
- tokenizer = BigBirdTokenizerFast.from_pretrained(model_id)
-
- @jax.jit
- def forward(*args, **kwargs):
- start_logits, end_logits, pooled_logits = model(*args, **kwargs)
- return start_logits, end_logits, jnp.argmax(pooled_logits, axis=-1)
-
- def evaluate(example):
- # encode question and context so that they are separated by a tokenizer.sep_token and cut at max_length
- inputs = tokenizer(
- example["question"],
- example["context"],
- return_tensors="np",
- max_length=4096,
- padding="max_length",
- truncation=True,
- )
-
- start_scores, end_scores, category = forward(**inputs)
-
- predicted_category = CATEGORY_MAPPING[category.item()]
-
- example["targets"] = example["long"] + example["short"]
- if example["category"] in ["yes", "no", "null"]:
- example["targets"] = [example["category"]]
- example["has_tgt"] = example["category"] != "null"
- # Now target can be: "yes", "no", "null", "list of long & short answers"
-
- if predicted_category in ["yes", "no", "null"]:
- example["output"] = [predicted_category]
- example["match"] = example["output"] == example["targets"]
- example["has_pred"] = predicted_category != "null"
- return example
-
- max_size = 38 if predicted_category == "short" else 1024
- start_score, end_score = get_best_valid_start_end_idx(
- start_scores[0], end_scores[0], top_k=8, max_size=max_size
- )
-
- input_ids = inputs["input_ids"][0].tolist()
- example["output"] = [tokenizer.decode(input_ids[start_score : end_score + 1])]
-
- answers = expand_to_aliases(example["targets"], make_sub_answers=True)
- predictions = expand_to_aliases(example["output"])
-
- # some preprocessing to both prediction and answer
- answers = {"".join(a.split()) for a in answers}
- predictions = {"".join(p.split()) for p in predictions}
- predictions = {s for s in predictions if s not in ["``", "''", "`", "'"]}
-
- # if there is a common element, it's an exact match
- example["match"] = len(list(answers & predictions)) > 0
- example["has_pred"] = predicted_category != "null" and len(predictions) > 0
-
- return example
-
- short_validation_dataset = short_validation_dataset.map(evaluate)
-
- total = len(short_validation_dataset)
- matched = len(short_validation_dataset.filter(lambda x: x["match"] == 1))
- print("EM score:", (matched / total) * 100, "%")
-
-
-if __name__ == "__main__":
- main()
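
A brief aside on `get_best_valid_start_end_idx` above: it takes the top-k start and end logits, scores every (start, end) pair, masks out pairs that go backwards or exceed `max_size`, and unravels the argmax of the score matrix (row = end candidate, column = start candidate). The NumPy restatement below, with a tiny worked example, follows the same indexing; it is an illustration, not the evaluation script's code.

```python
import numpy as np


def best_valid_span(start_scores, end_scores, top_k=2, max_size=100):
    start_idx = np.argsort(start_scores)[::-1][:top_k]  # top-k start positions
    end_idx = np.argsort(end_scores)[::-1][:top_k]      # top-k end positions
    # widths[i, j] is the span length for the i-th best end and j-th best start;
    # spans that go backwards or exceed max_size are pushed to -inf.
    widths = end_idx[:, None] - start_idx[None, :]
    mask = (widths < 0) | (widths > max_size)
    scores = end_scores[end_idx][:, None] + start_scores[start_idx][None, :] - 1e8 * mask
    best = int(np.argmax(scores))                        # flat index, row-major
    return start_idx[best % top_k], end_idx[best // top_k]


start = np.array([0.1, 2.0, 0.3, 1.5])
end = np.array([0.2, 0.1, 1.8, 2.5])
print(best_valid_span(start, end))  # picks start=1, end=3: the highest-scoring valid span
```
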
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/generation/configuration_utils.py b/spaces/chendl/compositional_test/transformers/src/transformers/generation/configuration_utils.py
deleted file mode 100644
index 1df7b57c735af349789373034ada8fac6c88c2d1..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/generation/configuration_utils.py
+++ /dev/null
@@ -1,714 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Generation configuration class and utilities."""
-
-import copy
-import json
-import os
-from typing import Any, Dict, Optional, Union
-
-from .. import __version__
-from ..configuration_utils import PretrainedConfig
-from ..utils import (
- GENERATION_CONFIG_NAME,
- PushToHubMixin,
- cached_file,
- download_url,
- extract_commit_hash,
- is_remote_url,
- logging,
-)
-
-
-logger = logging.get_logger(__name__)
-
-
-class GenerationConfig(PushToHubMixin):
- r"""
- Class that holds a configuration for a generation task. A `generate` call supports the following generation methods
- for text-decoder, text-to-text, speech-to-text, and vision-to-text models:
-
- - *greedy decoding* by calling [`~generation.GenerationMixin.greedy_search`] if `num_beams=1` and
- `do_sample=False`
- - *contrastive search* by calling [`~generation.GenerationMixin.contrastive_search`] if `penalty_alpha>0.`
- and `top_k>1`
- - *multinomial sampling* by calling [`~generation.GenerationMixin.sample`] if `num_beams=1` and
- `do_sample=True`
- - *beam-search decoding* by calling [`~generation.GenerationMixin.beam_search`] if `num_beams>1` and
- `do_sample=False`
- - *beam-search multinomial sampling* by calling [`~generation.GenerationMixin.beam_sample`] if
- `num_beams>1` and `do_sample=True`
- - *diverse beam-search decoding* by calling [`~generation.GenerationMixin.group_beam_search`], if
- `num_beams>1` and `num_beam_groups>1`
- - *constrained beam-search decoding* by calling [`~generation.GenerationMixin.constrained_beam_search`], if
- `constraints!=None` or `force_words_ids!=None`
-
- You do not need to call any of the above methods directly. Pass custom parameter values to 'generate'. To learn
- more about decoding strategies refer to the [text generation strategies guide](../generation_strategies).
-
- Args:
- > Parameters that control the length of the output
-
- max_length (`int`, *optional*, defaults to 20):
- The maximum length the generated tokens can have. Corresponds to the length of the input prompt +
- `max_new_tokens`. Its effect is overridden by `max_new_tokens`, if also set.
- max_new_tokens (`int`, *optional*):
- The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
- min_length (`int`, *optional*, defaults to 0):
- The minimum length of the sequence to be generated. Corresponds to the length of the input prompt +
- `min_new_tokens`. Its effect is overridden by `min_new_tokens`, if also set.
- min_new_tokens (`int`, *optional*):
- The minimum numbers of tokens to generate, ignoring the number of tokens in the prompt.
- early_stopping (`bool` or `str`, *optional*, defaults to `False`):
- Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values:
- `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where a
- heuristic is applied and the generation stops when it is very unlikely to find better candidates;
- `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical
- beam search algorithm).
- max_time(`float`, *optional*):
- The maximum amount of time you allow the computation to run for, in seconds. Generation will still finish
- the current pass after the allocated time has passed.
-
- > Parameters that control the generation strategy used
-
- do_sample (`bool`, *optional*, defaults to `False`):
- Whether or not to use sampling; use greedy decoding otherwise.
- num_beams (`int`, *optional*, defaults to 1):
- Number of beams for beam search. 1 means no beam search.
- num_beam_groups (`int`, *optional*, defaults to 1):
- Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams.
- See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details.
- penalty_alpha (`float`, *optional*):
- The values balance the model confidence and the degeneration penalty in contrastive search decoding.
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should use the past last key/values attentions (if applicable to the model) to
- speed up decoding.
-
- > Parameters for manipulation of the model output logits
-
- temperature (`float`, *optional*, defaults to 1.0):
- The value used to modulate the next token probabilities.
- top_k (`int`, *optional*, defaults to 50):
- The number of highest probability vocabulary tokens to keep for top-k-filtering.
- top_p (`float`, *optional*, defaults to 1.0):
- If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to
- `top_p` or higher are kept for generation.
- typical_p (`float`, *optional*, defaults to 1.0):
- Local typicality measures how similar the conditional probability of predicting a target token next is to
- the expected conditional probability of predicting a random token next, given the partial text already
- generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that
- add up to `typical_p` or higher are kept for generation. See [this
- paper](https://arxiv.org/pdf/2202.00666.pdf) for more details.
- epsilon_cutoff (`float`, *optional*, defaults to 0.0):
- If set to float strictly between 0 and 1, only tokens with a conditional probability greater than
- `epsilon_cutoff` will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the
- size of the model. See [Truncation Sampling as Language Model
- Desmoothing](https://arxiv.org/abs/2210.15191) for more details.
- eta_cutoff (`float`, *optional*, defaults to 0.0):
- Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly between
- 0 and 1, a token is only considered if it is greater than either `eta_cutoff` or `sqrt(eta_cutoff) *
- exp(-entropy(softmax(next_token_logits)))`. The latter term is intuitively the expected next token
- probability, scaled by `sqrt(eta_cutoff)`. In the paper, suggested values range from 3e-4 to 2e-3,
- depending on the size of the model. See [Truncation Sampling as Language Model
- Desmoothing](https://arxiv.org/abs/2210.15191) for more details.
- diversity_penalty (`float`, *optional*, defaults to 0.0):
- This value is subtracted from a beam's score if it generates a token that is the same as a token from any beam
- of another group at a particular time step. Note that `diversity_penalty` is only effective if `group beam search` is enabled.
- repetition_penalty (`float`, *optional*, defaults to 1.0):
- The parameter for repetition penalty. 1.0 means no penalty. See [this
- paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
- encoder_repetition_penalty (`float`, *optional*, defaults to 1.0):
- The parameter for encoder_repetition_penalty. An exponential penalty on sequences that are not in the
- original input. 1.0 means no penalty.
- length_penalty (`float`, *optional*, defaults to 1.0):
- Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
- the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
- likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
- `length_penalty` < 0.0 encourages shorter sequences.
- no_repeat_ngram_size (`int`, *optional*, defaults to 0):
- If set to int > 0, all ngrams of that size can only occur once.
- bad_words_ids(`List[List[int]]`, *optional*):
- List of token ids that are not allowed to be generated. In order to get the token ids of the words that
- should not appear in the generated text, use `tokenizer(bad_words, add_prefix_space=True,
- add_special_tokens=False).input_ids`.
- force_words_ids(`List[List[int]]` or `List[List[List[int]]]`, *optional*):
- List of token ids that must be generated. If given a `List[List[int]]`, this is treated as a simple list of
- words that must be included, the opposite to `bad_words_ids`. If given `List[List[List[int]]]`, this
- triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081), where one
- can allow different forms of each word.
- renormalize_logits (`bool`, *optional*, defaults to `False`):
- Whether to renormalize the logits after applying all the logits processors or warpers (including the custom
- ones). It's highly recommended to set this flag to `True` as the search algorithms suppose the score logits
- are normalized but some logit processors or warpers break the normalization.
- constraints (`List[Constraint]`, *optional*):
- Custom constraints that can be added to the generation to ensure that the output will contain the use of
- certain tokens as defined by `Constraint` objects, in the most sensible way possible.
- forced_bos_token_id (`int`, *optional*, defaults to `model.config.forced_bos_token_id`):
- The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for
- multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target
- language token.
- forced_eos_token_id (`Union[int, List[int]]`, *optional*, defaults to `model.config.forced_eos_token_id`):
- The id of the token to force as the last generated token when `max_length` is reached. Optionally, use a
- list to set multiple *end-of-sequence* tokens.
- remove_invalid_values (`bool`, *optional*, defaults to `model.config.remove_invalid_values`):
- Whether to remove possible *nan* and *inf* outputs of the model to prevent the generation method to crash.
- Note that using `remove_invalid_values` can slow down generation.
- exponential_decay_length_penalty (`tuple(int, float)`, *optional*):
- This Tuple adds an exponentially increasing length penalty, after a certain amount of tokens have been
- generated. The tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates where
- penalty starts and `decay_factor` represents the factor of exponential decay
- suppress_tokens (`List[int]`, *optional*):
- A list of tokens that will be suppressed at generation. The `SuppressTokens` logit processor will set their
- log probs to `-inf` so that they are not sampled.
- begin_suppress_tokens (`List[int]`, *optional*):
- A list of tokens that will be suppressed at the beginning of the generation. The `SuppressBeginTokens` logit
- processor will set their log probs to `-inf` so that they are not sampled.
- forced_decoder_ids (`List[List[int]]`, *optional*):
- A list of pairs of integers which indicates a mapping from generation indices to token indices that will be
- forced before sampling. For example, `[[1, 123]]` means the second generated token will always be a token
- of index 123.
-
- > Parameters that define the output variables of `generate`
-
- num_return_sequences(`int`, *optional*, defaults to 1):
- The number of independently computed returned sequences for each element in the batch.
- output_attentions (`bool`, *optional*, defaults to `False`):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more details.
- output_hidden_states (`bool`, *optional*, defaults to `False`):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more details.
- output_scores (`bool`, *optional*, defaults to `False`):
- Whether or not to return the prediction scores. See `scores` under returned tensors for more details.
- return_dict_in_generate (`bool`, *optional*, defaults to `False`):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-
- > Special tokens that can be used at generation time
-
- pad_token_id (`int`, *optional*):
- The id of the *padding* token.
- bos_token_id (`int`, *optional*):
- The id of the *beginning-of-sequence* token.
- eos_token_id (`Union[int, List[int]]`, *optional*):
- The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.
-
- > Generation parameters exclusive to encoder-decoder models
-
- encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
- If set to int > 0, all ngrams of that size that occur in the `encoder_input_ids` cannot occur in the
- `decoder_input_ids`.
- decoder_start_token_id (`int`, *optional*):
- If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token.
-
- > Wild card
-
- generation_kwargs:
- Additional generation kwargs will be forwarded to the `generate` function of the model. Kwargs that are not
- present in `generate`'s signature will be used in the model forward pass.
- """
-
- def __init__(self, **kwargs):
- # Parameters that control the length of the output
- self.max_length = kwargs.pop("max_length", 20)
- self.max_new_tokens = kwargs.pop("max_new_tokens", None)
- self.min_length = kwargs.pop("min_length", 0)
- self.min_new_tokens = kwargs.pop("min_new_tokens", None)
- self.early_stopping = kwargs.pop("early_stopping", False)
- self.max_time = kwargs.pop("max_time", None)
-
- # Parameters that control the generation strategy used
- self.do_sample = kwargs.pop("do_sample", False)
- self.num_beams = kwargs.pop("num_beams", 1)
- self.num_beam_groups = kwargs.pop("num_beam_groups", 1)
- self.penalty_alpha = kwargs.pop("penalty_alpha", None)
- self.use_cache = kwargs.pop("use_cache", True)
-
- # Parameters for manipulation of the model output logits
- self.temperature = kwargs.pop("temperature", 1.0)
- self.top_k = kwargs.pop("top_k", 50)
- self.top_p = kwargs.pop("top_p", 1.0)
- self.typical_p = kwargs.pop("typical_p", 1.0)
- self.epsilon_cutoff = kwargs.pop("epsilon_cutoff", 0.0)
- self.eta_cutoff = kwargs.pop("eta_cutoff", 0.0)
- self.diversity_penalty = kwargs.pop("diversity_penalty", 0.0)
- self.repetition_penalty = kwargs.pop("repetition_penalty", 1.0)
- self.encoder_repetition_penalty = kwargs.pop("encoder_repetition_penalty", 1.0)
- self.length_penalty = kwargs.pop("length_penalty", 1.0)
- self.no_repeat_ngram_size = kwargs.pop("no_repeat_ngram_size", 0)
- self.bad_words_ids = kwargs.pop("bad_words_ids", None)
- self.force_words_ids = kwargs.pop("force_words_ids", None)
- self.renormalize_logits = kwargs.pop("renormalize_logits", False)
- self.constraints = kwargs.pop("constraints", None)
- self.forced_bos_token_id = kwargs.pop("forced_bos_token_id", None)
- self.forced_eos_token_id = kwargs.pop("forced_eos_token_id", None)
- self.remove_invalid_values = kwargs.pop("remove_invalid_values", False)
- self.exponential_decay_length_penalty = kwargs.pop("exponential_decay_length_penalty", None)
- self.suppress_tokens = kwargs.pop("suppress_tokens", None)
- self.begin_suppress_tokens = kwargs.pop("begin_suppress_tokens", None)
- self.forced_decoder_ids = kwargs.pop("forced_decoder_ids", None)
-
- # Parameters that define the output variables of `generate`
- self.num_return_sequences = kwargs.pop("num_return_sequences", 1)
- self.output_attentions = kwargs.pop("output_attentions", False)
- self.output_hidden_states = kwargs.pop("output_hidden_states", False)
- self.output_scores = kwargs.pop("output_scores", False)
- self.return_dict_in_generate = kwargs.pop("return_dict_in_generate", False)
-
- # Special tokens that can be used at generation time
- self.pad_token_id = kwargs.pop("pad_token_id", None)
- self.bos_token_id = kwargs.pop("bos_token_id", None)
- self.eos_token_id = kwargs.pop("eos_token_id", None)
-
- # Generation parameters exclusive to encoder-decoder models
- self.encoder_no_repeat_ngram_size = kwargs.pop("encoder_no_repeat_ngram_size", 0)
- self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None)
-
- # Wild card
- self.generation_kwargs = kwargs.pop("generation_kwargs", {})
-
- # The remaining attributes do not parametrize `.generate()`, but are informative and/or used by the hub
- # interface.
- self._from_model_config = kwargs.pop("_from_model_config", False)
- self._commit_hash = kwargs.pop("_commit_hash", None)
- self.transformers_version = kwargs.pop("transformers_version", __version__)
-
- # Additional attributes without default values
- if not self._from_model_config:
- # we don't want to copy values from the model config if we're initializing a `GenerationConfig` from a model's default configuration file
- for key, value in kwargs.items():
- try:
- setattr(self, key, value)
- except AttributeError as err:
- logger.error(f"Can't set {key} with value {value} for {self}")
- raise err
-
- # Validate the values of the attributes
- self.validate()
-
- def __eq__(self, other):
- if not isinstance(other, GenerationConfig):
- return False
-
- self_dict = self.__dict__.copy()
- other_dict = other.__dict__.copy()
- # ignore metadata
- for metadata_field in ("_from_model_config", "_commit_hash", "transformers_version"):
- self_dict.pop(metadata_field, None)
- other_dict.pop(metadata_field, None)
- return self_dict == other_dict
-
- def __repr__(self):
- return f"{self.__class__.__name__} {self.to_json_string()}"
-
- def validate(self):
- """
- Validates the values of the attributes of the GenerationConfig instance, and raises a `ValueError` if any of
- the values are invalid.
- """
- if self.early_stopping not in {True, False, "never"}:
- raise ValueError(f"`early_stopping` must be a boolean or 'never', but is {self.early_stopping}.")
-
- def save_pretrained(
- self,
- save_directory: Union[str, os.PathLike],
- config_file_name: Optional[Union[str, os.PathLike]] = None,
- push_to_hub: bool = False,
- **kwargs,
- ):
- r"""
- Save a generation configuration object to the directory `save_directory`, so that it can be re-loaded using the
- [`~GenerationConfig.from_pretrained`] class method.
-
- Args:
- save_directory (`str` or `os.PathLike`):
- Directory where the configuration JSON file will be saved (will be created if it does not exist).
- config_file_name (`str` or `os.PathLike`, *optional*, defaults to `"generation_config.json"`):
- Name of the generation configuration JSON file to be saved in `save_directory`.
- push_to_hub (`bool`, *optional*, defaults to `False`):
- Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
- repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
- namespace).
- kwargs:
- Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
- """
- config_file_name = config_file_name if config_file_name is not None else GENERATION_CONFIG_NAME
-
- if os.path.isfile(save_directory):
- raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")
-
- os.makedirs(save_directory, exist_ok=True)
-
- if push_to_hub:
- commit_message = kwargs.pop("commit_message", None)
- repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
- repo_id = self._create_repo(repo_id, **kwargs)
- files_timestamps = self._get_files_timestamps(save_directory)
-
- output_config_file = os.path.join(save_directory, config_file_name)
-
- self.to_json_file(output_config_file, use_diff=True)
- logger.info(f"Configuration saved in {output_config_file}")
-
- if push_to_hub:
- self._upload_modified_files(
- save_directory,
- repo_id,
- files_timestamps,
- commit_message=commit_message,
- token=kwargs.get("use_auth_token"),
- )
-
- @classmethod
- def from_pretrained(
- cls,
- pretrained_model_name: Union[str, os.PathLike],
- config_file_name: Optional[Union[str, os.PathLike]] = None,
- **kwargs,
- ) -> "GenerationConfig":
- r"""
- Instantiate a [`GenerationConfig`] from a generation configuration file.
-
- Args:
- pretrained_model_name (`str` or `os.PathLike`):
- This can be either:
-
- - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
- huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or
- namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
- - a path to a *directory* containing a configuration file saved using the
- [`~GenerationConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
- config_file_name (`str` or `os.PathLike`, *optional*, defaults to `"generation_config.json"`):
- Name of the generation configuration JSON file to be loaded from `pretrained_model_name`.
- cache_dir (`str` or `os.PathLike`, *optional*):
- Path to a directory in which a downloaded pretrained model configuration should be cached if the
- standard cache should not be used.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether or not to force to (re-)download the configuration files and override the cached versions if
- they exist.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether or not to delete incompletely received file. Attempts to resume the download if such a file
- exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
- use_auth_token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- revision (`str`, *optional*, defaults to `"main"`):
- The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
- git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
- identifier allowed by git.
-
-
-
- To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.
-
-
-
- return_unused_kwargs (`bool`, *optional*, defaults to `False`):
- If `False`, then this function returns just the final configuration object.
-
- If `True`, then this functions returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a
- dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the
- part of `kwargs` which has not been used to update `config` and is otherwise ignored.
- subfolder (`str`, *optional*, defaults to `""`):
- In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
- specify the folder name here.
- kwargs (`Dict[str, Any]`, *optional*):
- The values in kwargs of any keys which are configuration attributes will be used to override the loaded
- values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled
- by the `return_unused_kwargs` keyword parameter.
-
- Returns:
- [`GenerationConfig`]: The configuration object instantiated from this pretrained model.
-
- Examples:
-
- ```python
- >>> from transformers import GenerationConfig
-
- >>> # Download configuration from huggingface.co and cache.
- >>> generation_config = GenerationConfig.from_pretrained("gpt2")
-
- >>> # E.g. config was saved using *save_pretrained('./test/saved_model/')*
- >>> generation_config.save_pretrained("./test/saved_model/")
- >>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/")
-
- >>> # You can also specify configuration names to your generation configuration file
- >>> generation_config.save_pretrained("./test/saved_model/", config_file_name="my_configuration.json")
- >>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/", "my_configuration.json")
-
- >>> # If you'd like to try a minor variation to an existing configuration, you can also pass generation
- >>> # arguments to `.from_pretrained()`. Be mindful that typos and unused arguments will be ignored
- >>> generation_config, unused_kwargs = GenerationConfig.from_pretrained(
- ... "gpt2", top_k=1, foo=False, return_unused_kwargs=True
- ... )
- >>> generation_config.top_k
- 1
-
- >>> unused_kwargs
- {'foo': False}
- ```"""
- config_file_name = config_file_name if config_file_name is not None else GENERATION_CONFIG_NAME
-
- cache_dir = kwargs.pop("cache_dir", None)
- force_download = kwargs.pop("force_download", False)
- resume_download = kwargs.pop("resume_download", False)
- proxies = kwargs.pop("proxies", None)
- use_auth_token = kwargs.pop("use_auth_token", None)
- local_files_only = kwargs.pop("local_files_only", False)
- revision = kwargs.pop("revision", None)
- subfolder = kwargs.pop("subfolder", "")
- from_pipeline = kwargs.pop("_from_pipeline", None)
- from_auto_class = kwargs.pop("_from_auto", False)
- commit_hash = kwargs.pop("_commit_hash", None)
-
- user_agent = {"file_type": "config", "from_auto_class": from_auto_class}
- if from_pipeline is not None:
- user_agent["using_pipeline"] = from_pipeline
-
- config_path = os.path.join(pretrained_model_name, config_file_name)
- config_path = str(config_path)
-
- is_local = os.path.exists(config_path)
- if os.path.isfile(os.path.join(subfolder, config_path)):
- # Special case when config_path is a local file
- resolved_config_file = config_path
- is_local = True
- elif is_remote_url(config_path):
- configuration_file = config_path
- resolved_config_file = download_url(config_path)
- else:
- configuration_file = config_file_name
- try:
- # Load from local folder or from cache or download from model Hub and cache
- resolved_config_file = cached_file(
- pretrained_model_name,
- configuration_file,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- user_agent=user_agent,
- revision=revision,
- subfolder=subfolder,
- _commit_hash=commit_hash,
- )
- commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
- except EnvironmentError:
- # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted to
- # the original exception.
- raise
- except Exception:
- # For any other exception, we throw a generic error.
- raise EnvironmentError(
- f"Can't load the configuration of '{pretrained_model_name}'. If you were trying to load it"
- " from 'https://huggingface.co/models', make sure you don't have a local directory with the same"
- f" name. Otherwise, make sure '{pretrained_model_name}' is the correct path to a directory"
- f" containing a {configuration_file} file"
- )
-
- try:
- # Load config dict
- config_dict = cls._dict_from_json_file(resolved_config_file)
- config_dict["_commit_hash"] = commit_hash
- except (json.JSONDecodeError, UnicodeDecodeError):
- raise EnvironmentError(
- f"It looks like the config file at '{resolved_config_file}' is not a valid JSON file."
- )
-
- if is_local:
- logger.info(f"loading configuration file {resolved_config_file}")
- else:
- logger.info(f"loading configuration file {configuration_file} from cache at {resolved_config_file}")
-
- return cls.from_dict(config_dict, **kwargs)
-
- @classmethod
- def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
- with open(json_file, "r", encoding="utf-8") as reader:
- text = reader.read()
- return json.loads(text)
-
- @classmethod
- def from_dict(cls, config_dict: Dict[str, Any], **kwargs) -> "GenerationConfig":
- """
- Instantiates a [`GenerationConfig`] from a Python dictionary of parameters.
-
- Args:
- config_dict (`Dict[str, Any]`):
- Dictionary that will be used to instantiate the configuration object.
- kwargs (`Dict[str, Any]`):
- Additional parameters from which to initialize the configuration object.
-
- Returns:
- [`GenerationConfig`]: The configuration object instantiated from those parameters.
- """
- return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)
- # Those arguments may be passed along for our internal telemetry.
- # We remove them so they don't appear in `return_unused_kwargs`.
- kwargs.pop("_from_auto", None)
- kwargs.pop("_from_pipeline", None)
- # The commit hash might have been updated in the `config_dict`, we don't want the kwargs to erase that update.
- if "_commit_hash" in kwargs and "_commit_hash" in config_dict:
- kwargs["_commit_hash"] = config_dict["_commit_hash"]
-
- # remove all the arguments that are in the config_dict
-
- config = cls(**config_dict, **kwargs)
- unused_kwargs = config.update(**kwargs)
-
- logger.info(f"Generate config {config}")
- if return_unused_kwargs:
- return config, unused_kwargs
- else:
- return config
-
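# Editor's note: a minimal, hypothetical sketch of how `from_dict` above can be used,
# assuming the installed `transformers` package exposes the same GenerationConfig API.
# This block is an annotation, not part of the deleted file.
from transformers import GenerationConfig

raw = {"max_new_tokens": 64, "do_sample": True, "temperature": 0.7}
config = GenerationConfig.from_dict(raw)
print(config.temperature)  # 0.7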
- def dict_torch_dtype_to_str(self, d: Dict[str, Any]) -> None:
- """
- Checks whether the passed dictionary and its nested dicts have a *torch_dtype* key and if it's not None,
-        converts torch.dtype to a string of just the type. For example, `torch.float32` gets converted into the
-        *"float32"* string, which can then be stored in JSON format.
- """
- if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], str):
- d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]
- for value in d.values():
- if isinstance(value, dict):
- self.dict_torch_dtype_to_str(value)
-
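# Editor's note: illustrative, standalone sketch (not part of the deleted file) of the
# torch.dtype -> str normalization performed by `dict_torch_dtype_to_str` above.
# Assumes `torch` is installed.
import torch

def _dtype_to_str(d: dict) -> None:
    # Replace torch.dtype values with their short string names, recursing into nested dicts.
    if d.get("torch_dtype") is not None and not isinstance(d["torch_dtype"], str):
        d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]  # torch.float32 -> "float32"
    for value in d.values():
        if isinstance(value, dict):
            _dtype_to_str(value)

d = {"torch_dtype": torch.float32, "nested": {"torch_dtype": torch.float16}}
_dtype_to_str(d)
print(d)  # {'torch_dtype': 'float32', 'nested': {'torch_dtype': 'float16'}}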
- def to_diff_dict(self) -> Dict[str, Any]:
- """
-        Removes all attributes from the config that correspond to the default config attributes, for better
-        readability, and serializes it to a Python dictionary.
-
- Returns:
-            `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
- """
- config_dict = self.to_dict()
-
- # get the default config dict
- default_config_dict = GenerationConfig().to_dict()
-
- serializable_config_dict = {}
-
- # only serialize values that differ from the default config
- for key, value in config_dict.items():
- if key not in default_config_dict or key == "transformers_version" or value != default_config_dict[key]:
- serializable_config_dict[key] = value
-
- self.dict_torch_dtype_to_str(serializable_config_dict)
- return serializable_config_dict
-
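# Editor's note: hedged sketch (not part of the deleted file) of `to_diff_dict`:
# only values that differ from a default GenerationConfig() are kept, which is also
# what `to_json_string(use_diff=True)` serializes. Assumes the `transformers` package.
from transformers import GenerationConfig

config = GenerationConfig(temperature=0.5, top_k=10)
diff = config.to_diff_dict()
# Expected to contain only the overridden keys plus `transformers_version`,
# e.g. {'temperature': 0.5, 'top_k': 10, 'transformers_version': '...'}
print(diff)
print(config.to_json_string(use_diff=True))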
- def to_dict(self) -> Dict[str, Any]:
- """
- Serializes this instance to a Python dictionary.
-
- Returns:
- `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
- """
- output = copy.deepcopy(self.__dict__)
- if "_commit_hash" in output:
- del output["_commit_hash"]
-
- # Transformers version when serializing this file
- output["transformers_version"] = __version__
-
- self.dict_torch_dtype_to_str(output)
- return output
-
- def to_json_string(self, use_diff: bool = True) -> str:
- """
- Serializes this instance to a JSON string.
-
- Args:
- use_diff (`bool`, *optional*, defaults to `True`):
- If set to `True`, only the difference between the config instance and the default `GenerationConfig()`
- is serialized to JSON string.
-
- Returns:
- `str`: String containing all the attributes that make up this configuration instance in JSON format.
- """
- if use_diff is True:
- config_dict = self.to_diff_dict()
- else:
- config_dict = self.to_dict()
- return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
-
- def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True):
- """
- Save this instance to a JSON file.
-
- Args:
- json_file_path (`str` or `os.PathLike`):
- Path to the JSON file in which this configuration instance's parameters will be saved.
- use_diff (`bool`, *optional*, defaults to `True`):
- If set to `True`, only the difference between the config instance and the default `GenerationConfig()`
- is serialized to JSON file.
- """
- with open(json_file_path, "w", encoding="utf-8") as writer:
- writer.write(self.to_json_string(use_diff=use_diff))
-
- @classmethod
- def from_model_config(cls, model_config: PretrainedConfig) -> "GenerationConfig":
- """
- Instantiates a [`GenerationConfig`] from a [`PretrainedConfig`]. This function is useful to convert legacy
- [`PretrainedConfig`] objects, which may contain generation parameters, into a stand-alone [`GenerationConfig`].
-
- Args:
- model_config (`PretrainedConfig`):
- The model config that will be used to instantiate the generation config.
-
- Returns:
- [`GenerationConfig`]: The configuration object instantiated from those parameters.
- """
- config_dict = model_config.to_dict()
- config_dict.pop("_from_model_config", None)
- config = cls.from_dict(config_dict, return_unused_kwargs=False, _from_model_config=True)
-
- # Special case: some models have generation attributes set in the decoder. Use them if still unset in the
- # generation config.
- for decoder_name in ("decoder", "generator", "text_config"):
- if decoder_name in config_dict:
- default_generation_config = GenerationConfig()
- decoder_config = config_dict[decoder_name]
- for attr in config.to_dict().keys():
- if attr in decoder_config and getattr(config, attr) == getattr(default_generation_config, attr):
- setattr(config, attr, decoder_config[attr])
-
- return config
-
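# Editor's note: hedged example (not part of the deleted file) of `from_model_config`,
# which lifts legacy generation parameters out of a model config. Assumes `transformers`
# is installed and network access to huggingface.co; the model name is only illustrative.
from transformers import AutoConfig, GenerationConfig

model_config = AutoConfig.from_pretrained("gpt2")
generation_config = GenerationConfig.from_model_config(model_config)
print(generation_config.max_length)  # value carried over from the legacy model config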
- def update(self, **kwargs):
- """
-        Updates attributes of this class instance with attributes from `kwargs` if they match existing attributes,
- returning all the unused kwargs.
-
- Args:
- kwargs (`Dict[str, Any]`):
- Dictionary of attributes to tentatively update this class.
-
- Returns:
- `Dict[str, Any]`: Dictionary containing all the key-value pairs that were not used to update the instance.
- """
- to_remove = []
- for key, value in kwargs.items():
- if hasattr(self, key):
- setattr(self, key, value)
- to_remove.append(key)
-
- # remove all the attributes that were updated, without modifying the input dict
- unused_kwargs = {key: value for key, value in kwargs.items() if key not in to_remove}
- return unused_kwargs
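# Editor's note: minimal sketch (not part of the deleted file) of `update` above:
# known attributes are overwritten, anything else is returned untouched.
# Assumes the installed `transformers` package.
from transformers import GenerationConfig

config = GenerationConfig()
leftover = config.update(top_k=5, not_a_generation_arg="ignored")
print(config.top_k)  # 5
print(leftover)      # {'not_a_generation_arg': 'ignored'}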
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/dataconv.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/dataconv.py
deleted file mode 100644
index d242cbb9c9727441ef171c773b9bd598e5425731..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/dataconv.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import array
-from datetime import datetime, date, tzinfo
-from ipaddress import IPv4Address
-from typing import Sequence, Optional, Any
-from uuid import UUID, SafeUUID
-
-from clickhouse_connect.driver.common import int_size
-from clickhouse_connect.driver.types import ByteSource
-from clickhouse_connect.driver.options import np
-
-
-MONTH_DAYS = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365)
-MONTH_DAYS_LEAP = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366)
-
-
-def read_ipv4_col(source: ByteSource, num_rows: int):
- column = source.read_array('I', num_rows)
- fast_ip_v4 = IPv4Address.__new__
- new_col = []
- app = new_col.append
- for x in column:
- ipv4 = fast_ip_v4(IPv4Address)
- ipv4._ip = x # pylint: disable=protected-access
- app(ipv4)
- return new_col
-
-
-def read_datetime_col(source: ByteSource, num_rows: int, tz_info: Optional[tzinfo]):
- src_array = source.read_array('I', num_rows)
- if tz_info is None:
- fts = datetime.utcfromtimestamp
- return [fts(ts) for ts in src_array]
- fts = datetime.fromtimestamp
- return [fts(ts, tz_info) for ts in src_array]
-
-
-def epoch_days_to_date(days: int) -> date:
- cycles400, rem = divmod(days + 134774, 146097)
- cycles100, rem = divmod(rem, 36524)
- cycles, rem = divmod(rem, 1461)
- years, rem = divmod(rem, 365)
- year = (cycles << 2) + cycles400 * 400 + cycles100 * 100 + years + 1601
- if years == 4 or cycles100 == 4:
- return date(year - 1, 12, 31)
- m_list = MONTH_DAYS_LEAP if years == 3 and (year == 2000 or year % 100 != 0) else MONTH_DAYS
- month = (rem + 24) >> 5
- while rem < m_list[month]:
- month -= 1
- return date(year, month + 1, rem + 1 - m_list[month])
-
-
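# Editor's note: standalone sanity check (not part of the deleted file) showing that the
# Gregorian arithmetic in `epoch_days_to_date` above should agree with the standard
# library for day counts measured from the Unix epoch (1970-01-01).
from datetime import date, timedelta

def epoch_days_reference(days: int) -> date:
    # Reference implementation using the stdlib instead of manual cycle arithmetic.
    return date(1970, 1, 1) + timedelta(days=days)

print(epoch_days_reference(0))      # 1970-01-01
print(epoch_days_reference(19000))  # 2022-01-08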
-def read_date_col(source: ByteSource, num_rows: int):
- column = source.read_array('H', num_rows)
- return [epoch_days_to_date(x) for x in column]
-
-
-def read_date32_col(source: ByteSource, num_rows: int):
- column = source.read_array('l' if int_size == 2 else 'i', num_rows)
- return [epoch_days_to_date(x) for x in column]
-
-
-def read_uuid_col(source: ByteSource, num_rows: int):
- v = source.read_array('Q', num_rows * 2)
- empty_uuid = UUID(int=0)
- new_uuid = UUID.__new__
- unsafe = SafeUUID.unsafe
- oset = object.__setattr__
- column = []
- app = column.append
- for i in range(num_rows):
- ix = i << 1
- int_value = v[ix] << 64 | v[ix + 1]
- if int_value == 0:
- app(empty_uuid)
- else:
- fast_uuid = new_uuid(UUID)
- oset(fast_uuid, 'int', int_value)
- oset(fast_uuid, 'is_safe', unsafe)
- app(fast_uuid)
- return column
-
-
-def read_nullable_array(source: ByteSource, array_type: str, num_rows: int, null_obj: Any):
- null_map = source.read_bytes(num_rows)
- column = source.read_array(array_type, num_rows)
- return [null_obj if null_map[ix] else column[ix] for ix in range(num_rows)]
-
-
-def build_nullable_column(source: Sequence, null_map: bytes, null_obj: Any):
- return [source[ix] if null_map[ix] == 0 else null_obj for ix in range(len(source))]
-
-
-def build_lc_nullable_column(keys: Sequence, index: array.array, null_obj: Any):
- column = []
- for ix in index:
- if ix == 0:
- column.append(null_obj)
- else:
- column.append(keys[ix])
- return column
-
-
-def to_numpy_array(column: Sequence):
- arr = np.empty((len(column),), dtype=np.object)
- arr[:] = column
- return arr
-
-
-def pivot(data: Sequence[Sequence], start_row: int, end_row: int) -> Sequence[Sequence]:
- return tuple(zip(*data[start_row: end_row]))
-
-
-def write_str_col(column: Sequence, encoding: Optional[str], dest: bytearray):
- app = dest.append
- for x in column:
- if not x:
- app(0)
- else:
- if encoding:
- x = x.encode(encoding)
- sz = len(x)
- while True:
- b = sz & 0x7f
- sz >>= 7
- if sz == 0:
- app(b)
- break
- app(0x80 | b)
- dest += x
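# Editor's note: illustrative, standalone sketch (not part of the deleted file) of the
# LEB128-style variable-length size prefix that `write_str_col` above emits before each
# encoded string.
def encode_varint(size: int) -> bytes:
    # Emit 7 bits per byte, setting the high bit on every byte except the last.
    out = bytearray()
    while True:
        b = size & 0x7F
        size >>= 7
        if size == 0:
            out.append(b)
            break
        out.append(0x80 | b)
    return bytes(out)

print(encode_varint(5).hex())    # '05'
print(encode_varint(300).hex())  # 'ac02' (low 7 bits with continuation flag, then 0x02)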
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/extensions.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/extensions.py
deleted file mode 100644
index ac99592f55a73a62e70dae2fad3c696635129bdd..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/x509/extensions.py
+++ /dev/null
@@ -1,2215 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from __future__ import annotations
-
-import abc
-import datetime
-import hashlib
-import ipaddress
-import typing
-
-from cryptography import utils
-from cryptography.hazmat.bindings._rust import asn1
-from cryptography.hazmat.bindings._rust import x509 as rust_x509
-from cryptography.hazmat.primitives import constant_time, serialization
-from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePublicKey
-from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey
-from cryptography.hazmat.primitives.asymmetric.types import (
- CertificateIssuerPublicKeyTypes,
- CertificatePublicKeyTypes,
-)
-from cryptography.x509.certificate_transparency import (
- SignedCertificateTimestamp,
-)
-from cryptography.x509.general_name import (
- DirectoryName,
- DNSName,
- GeneralName,
- IPAddress,
- OtherName,
- RegisteredID,
- RFC822Name,
- UniformResourceIdentifier,
- _IPAddressTypes,
-)
-from cryptography.x509.name import Name, RelativeDistinguishedName
-from cryptography.x509.oid import (
- CRLEntryExtensionOID,
- ExtensionOID,
- ObjectIdentifier,
- OCSPExtensionOID,
-)
-
-ExtensionTypeVar = typing.TypeVar(
- "ExtensionTypeVar", bound="ExtensionType", covariant=True
-)
-
-
-def _key_identifier_from_public_key(
- public_key: CertificatePublicKeyTypes,
-) -> bytes:
- if isinstance(public_key, RSAPublicKey):
- data = public_key.public_bytes(
- serialization.Encoding.DER,
- serialization.PublicFormat.PKCS1,
- )
- elif isinstance(public_key, EllipticCurvePublicKey):
- data = public_key.public_bytes(
- serialization.Encoding.X962,
- serialization.PublicFormat.UncompressedPoint,
- )
- else:
- # This is a very slow way to do this.
- serialized = public_key.public_bytes(
- serialization.Encoding.DER,
- serialization.PublicFormat.SubjectPublicKeyInfo,
- )
- data = asn1.parse_spki_for_data(serialized)
-
- return hashlib.sha1(data).digest()
-
-
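# Editor's note: hedged usage sketch (not part of the deleted file) of the key-identifier
# derivation above through the public API: SubjectKeyIdentifier.from_public_key() calls
# _key_identifier_from_public_key() internally. Assumes `cryptography` is installed.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
ski = x509.SubjectKeyIdentifier.from_public_key(private_key.public_key())
print(ski.digest.hex())  # 20-byte SHA-1 key identifier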
-def _make_sequence_methods(field_name: str):
- def len_method(self) -> int:
- return len(getattr(self, field_name))
-
- def iter_method(self):
- return iter(getattr(self, field_name))
-
- def getitem_method(self, idx):
- return getattr(self, field_name)[idx]
-
- return len_method, iter_method, getitem_method
-
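# Editor's note: standalone illustration (not part of the deleted file) of the pattern used
# by `_make_sequence_methods` above: generate __len__/__iter__/__getitem__ once and bind
# them at class-definition time so every sequence-like wrapper class stays tiny.
def _make_sequence_methods(field_name: str):
    def len_method(self) -> int:
        return len(getattr(self, field_name))

    def iter_method(self):
        return iter(getattr(self, field_name))

    def getitem_method(self, idx):
        return getattr(self, field_name)[idx]

    return len_method, iter_method, getitem_method

class Bag:
    def __init__(self, items):
        self._items = list(items)

    __len__, __iter__, __getitem__ = _make_sequence_methods("_items")

b = Bag(["a", "b", "c"])
print(len(b), b[1], list(b))  # 3 b ['a', 'b', 'c']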
-
-class DuplicateExtension(Exception):
- def __init__(self, msg: str, oid: ObjectIdentifier) -> None:
- super().__init__(msg)
- self.oid = oid
-
-
-class ExtensionNotFound(Exception):
- def __init__(self, msg: str, oid: ObjectIdentifier) -> None:
- super().__init__(msg)
- self.oid = oid
-
-
-class ExtensionType(metaclass=abc.ABCMeta):
- oid: typing.ClassVar[ObjectIdentifier]
-
- def public_bytes(self) -> bytes:
- """
- Serializes the extension type to DER.
- """
- raise NotImplementedError(
- "public_bytes is not implemented for extension type {!r}".format(
- self
- )
- )
-
-
-class Extensions:
- def __init__(
- self, extensions: typing.Iterable[Extension[ExtensionType]]
- ) -> None:
- self._extensions = list(extensions)
-
- def get_extension_for_oid(
- self, oid: ObjectIdentifier
- ) -> Extension[ExtensionType]:
- for ext in self:
- if ext.oid == oid:
- return ext
-
- raise ExtensionNotFound(f"No {oid} extension was found", oid)
-
- def get_extension_for_class(
- self, extclass: typing.Type[ExtensionTypeVar]
- ) -> Extension[ExtensionTypeVar]:
- if extclass is UnrecognizedExtension:
- raise TypeError(
- "UnrecognizedExtension can't be used with "
- "get_extension_for_class because more than one instance of the"
- " class may be present."
- )
-
- for ext in self:
- if isinstance(ext.value, extclass):
- return ext
-
- raise ExtensionNotFound(
- f"No {extclass} extension was found", extclass.oid
- )
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_extensions")
-
- def __repr__(self) -> str:
- return f""
-
-
-class CRLNumber(ExtensionType):
- oid = ExtensionOID.CRL_NUMBER
-
- def __init__(self, crl_number: int) -> None:
- if not isinstance(crl_number, int):
- raise TypeError("crl_number must be an integer")
-
- self._crl_number = crl_number
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CRLNumber):
- return NotImplemented
-
- return self.crl_number == other.crl_number
-
- def __hash__(self) -> int:
- return hash(self.crl_number)
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def crl_number(self) -> int:
- return self._crl_number
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class AuthorityKeyIdentifier(ExtensionType):
- oid = ExtensionOID.AUTHORITY_KEY_IDENTIFIER
-
- def __init__(
- self,
- key_identifier: typing.Optional[bytes],
- authority_cert_issuer: typing.Optional[typing.Iterable[GeneralName]],
- authority_cert_serial_number: typing.Optional[int],
- ) -> None:
- if (authority_cert_issuer is None) != (
- authority_cert_serial_number is None
- ):
- raise ValueError(
- "authority_cert_issuer and authority_cert_serial_number "
- "must both be present or both None"
- )
-
- if authority_cert_issuer is not None:
- authority_cert_issuer = list(authority_cert_issuer)
- if not all(
- isinstance(x, GeneralName) for x in authority_cert_issuer
- ):
- raise TypeError(
- "authority_cert_issuer must be a list of GeneralName "
- "objects"
- )
-
- if authority_cert_serial_number is not None and not isinstance(
- authority_cert_serial_number, int
- ):
- raise TypeError("authority_cert_serial_number must be an integer")
-
- self._key_identifier = key_identifier
- self._authority_cert_issuer = authority_cert_issuer
- self._authority_cert_serial_number = authority_cert_serial_number
-
- # This takes a subset of CertificatePublicKeyTypes because an issuer
- # cannot have an X25519/X448 key. This introduces some unfortunate
- # asymmetry that requires typing users to explicitly
- # narrow their type, but we should make this accurate and not just
- # convenient.
- @classmethod
- def from_issuer_public_key(
- cls, public_key: CertificateIssuerPublicKeyTypes
- ) -> AuthorityKeyIdentifier:
- digest = _key_identifier_from_public_key(public_key)
- return cls(
- key_identifier=digest,
- authority_cert_issuer=None,
- authority_cert_serial_number=None,
- )
-
- @classmethod
- def from_issuer_subject_key_identifier(
- cls, ski: SubjectKeyIdentifier
- ) -> AuthorityKeyIdentifier:
- return cls(
- key_identifier=ski.digest,
- authority_cert_issuer=None,
- authority_cert_serial_number=None,
- )
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, AuthorityKeyIdentifier):
- return NotImplemented
-
- return (
- self.key_identifier == other.key_identifier
- and self.authority_cert_issuer == other.authority_cert_issuer
- and self.authority_cert_serial_number
- == other.authority_cert_serial_number
- )
-
- def __hash__(self) -> int:
- if self.authority_cert_issuer is None:
- aci = None
- else:
- aci = tuple(self.authority_cert_issuer)
- return hash(
- (self.key_identifier, aci, self.authority_cert_serial_number)
- )
-
- @property
- def key_identifier(self) -> typing.Optional[bytes]:
- return self._key_identifier
-
- @property
- def authority_cert_issuer(
- self,
- ) -> typing.Optional[typing.List[GeneralName]]:
- return self._authority_cert_issuer
-
- @property
- def authority_cert_serial_number(self) -> typing.Optional[int]:
- return self._authority_cert_serial_number
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class SubjectKeyIdentifier(ExtensionType):
- oid = ExtensionOID.SUBJECT_KEY_IDENTIFIER
-
- def __init__(self, digest: bytes) -> None:
- self._digest = digest
-
- @classmethod
- def from_public_key(
- cls, public_key: CertificatePublicKeyTypes
- ) -> SubjectKeyIdentifier:
- return cls(_key_identifier_from_public_key(public_key))
-
- @property
- def digest(self) -> bytes:
- return self._digest
-
- @property
- def key_identifier(self) -> bytes:
- return self._digest
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, SubjectKeyIdentifier):
- return NotImplemented
-
- return constant_time.bytes_eq(self.digest, other.digest)
-
- def __hash__(self) -> int:
- return hash(self.digest)
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class AuthorityInformationAccess(ExtensionType):
- oid = ExtensionOID.AUTHORITY_INFORMATION_ACCESS
-
- def __init__(
- self, descriptions: typing.Iterable[AccessDescription]
- ) -> None:
- descriptions = list(descriptions)
- if not all(isinstance(x, AccessDescription) for x in descriptions):
- raise TypeError(
- "Every item in the descriptions list must be an "
- "AccessDescription"
- )
-
- self._descriptions = descriptions
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions")
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, AuthorityInformationAccess):
- return NotImplemented
-
- return self._descriptions == other._descriptions
-
- def __hash__(self) -> int:
- return hash(tuple(self._descriptions))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class SubjectInformationAccess(ExtensionType):
- oid = ExtensionOID.SUBJECT_INFORMATION_ACCESS
-
- def __init__(
- self, descriptions: typing.Iterable[AccessDescription]
- ) -> None:
- descriptions = list(descriptions)
- if not all(isinstance(x, AccessDescription) for x in descriptions):
- raise TypeError(
- "Every item in the descriptions list must be an "
- "AccessDescription"
- )
-
- self._descriptions = descriptions
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions")
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, SubjectInformationAccess):
- return NotImplemented
-
- return self._descriptions == other._descriptions
-
- def __hash__(self) -> int:
- return hash(tuple(self._descriptions))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class AccessDescription:
- def __init__(
- self, access_method: ObjectIdentifier, access_location: GeneralName
- ) -> None:
- if not isinstance(access_method, ObjectIdentifier):
- raise TypeError("access_method must be an ObjectIdentifier")
-
- if not isinstance(access_location, GeneralName):
- raise TypeError("access_location must be a GeneralName")
-
- self._access_method = access_method
- self._access_location = access_location
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, AccessDescription):
- return NotImplemented
-
- return (
- self.access_method == other.access_method
- and self.access_location == other.access_location
- )
-
- def __hash__(self) -> int:
- return hash((self.access_method, self.access_location))
-
- @property
- def access_method(self) -> ObjectIdentifier:
- return self._access_method
-
- @property
- def access_location(self) -> GeneralName:
- return self._access_location
-
-
-class BasicConstraints(ExtensionType):
- oid = ExtensionOID.BASIC_CONSTRAINTS
-
- def __init__(self, ca: bool, path_length: typing.Optional[int]) -> None:
- if not isinstance(ca, bool):
- raise TypeError("ca must be a boolean value")
-
- if path_length is not None and not ca:
- raise ValueError("path_length must be None when ca is False")
-
- if path_length is not None and (
- not isinstance(path_length, int) or path_length < 0
- ):
- raise TypeError(
- "path_length must be a non-negative integer or None"
- )
-
- self._ca = ca
- self._path_length = path_length
-
- @property
- def ca(self) -> bool:
- return self._ca
-
- @property
- def path_length(self) -> typing.Optional[int]:
- return self._path_length
-
- def __repr__(self) -> str:
- return (
- ""
- ).format(self)
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, BasicConstraints):
- return NotImplemented
-
- return self.ca == other.ca and self.path_length == other.path_length
-
- def __hash__(self) -> int:
- return hash((self.ca, self.path_length))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
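# Editor's note: hedged usage sketch (not part of the deleted file) of the BasicConstraints
# extension defined above, via the installed `cryptography` package.
from cryptography import x509

ca_constraint = x509.BasicConstraints(ca=True, path_length=0)
print(ca_constraint.ca, ca_constraint.path_length)  # True 0
print(ca_constraint.public_bytes().hex())           # DER encoding of the extension value
# Passing a path_length while ca=False raises ValueError, per the validation above.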
-
-class DeltaCRLIndicator(ExtensionType):
- oid = ExtensionOID.DELTA_CRL_INDICATOR
-
- def __init__(self, crl_number: int) -> None:
- if not isinstance(crl_number, int):
- raise TypeError("crl_number must be an integer")
-
- self._crl_number = crl_number
-
- @property
- def crl_number(self) -> int:
- return self._crl_number
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, DeltaCRLIndicator):
- return NotImplemented
-
- return self.crl_number == other.crl_number
-
- def __hash__(self) -> int:
- return hash(self.crl_number)
-
- def __repr__(self) -> str:
- return f""
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class CRLDistributionPoints(ExtensionType):
- oid = ExtensionOID.CRL_DISTRIBUTION_POINTS
-
- def __init__(
- self, distribution_points: typing.Iterable[DistributionPoint]
- ) -> None:
- distribution_points = list(distribution_points)
- if not all(
- isinstance(x, DistributionPoint) for x in distribution_points
- ):
- raise TypeError(
- "distribution_points must be a list of DistributionPoint "
- "objects"
- )
-
- self._distribution_points = distribution_points
-
- __len__, __iter__, __getitem__ = _make_sequence_methods(
- "_distribution_points"
- )
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CRLDistributionPoints):
- return NotImplemented
-
- return self._distribution_points == other._distribution_points
-
- def __hash__(self) -> int:
- return hash(tuple(self._distribution_points))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class FreshestCRL(ExtensionType):
- oid = ExtensionOID.FRESHEST_CRL
-
- def __init__(
- self, distribution_points: typing.Iterable[DistributionPoint]
- ) -> None:
- distribution_points = list(distribution_points)
- if not all(
- isinstance(x, DistributionPoint) for x in distribution_points
- ):
- raise TypeError(
- "distribution_points must be a list of DistributionPoint "
- "objects"
- )
-
- self._distribution_points = distribution_points
-
- __len__, __iter__, __getitem__ = _make_sequence_methods(
- "_distribution_points"
- )
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, FreshestCRL):
- return NotImplemented
-
- return self._distribution_points == other._distribution_points
-
- def __hash__(self) -> int:
- return hash(tuple(self._distribution_points))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class DistributionPoint:
- def __init__(
- self,
- full_name: typing.Optional[typing.Iterable[GeneralName]],
- relative_name: typing.Optional[RelativeDistinguishedName],
- reasons: typing.Optional[typing.FrozenSet[ReasonFlags]],
- crl_issuer: typing.Optional[typing.Iterable[GeneralName]],
- ) -> None:
- if full_name and relative_name:
- raise ValueError(
- "You cannot provide both full_name and relative_name, at "
- "least one must be None."
- )
- if not full_name and not relative_name and not crl_issuer:
- raise ValueError(
- "Either full_name, relative_name or crl_issuer must be "
- "provided."
- )
-
- if full_name is not None:
- full_name = list(full_name)
- if not all(isinstance(x, GeneralName) for x in full_name):
- raise TypeError(
- "full_name must be a list of GeneralName objects"
- )
-
- if relative_name:
- if not isinstance(relative_name, RelativeDistinguishedName):
- raise TypeError(
- "relative_name must be a RelativeDistinguishedName"
- )
-
- if crl_issuer is not None:
- crl_issuer = list(crl_issuer)
- if not all(isinstance(x, GeneralName) for x in crl_issuer):
- raise TypeError(
- "crl_issuer must be None or a list of general names"
- )
-
- if reasons and (
- not isinstance(reasons, frozenset)
- or not all(isinstance(x, ReasonFlags) for x in reasons)
- ):
- raise TypeError("reasons must be None or frozenset of ReasonFlags")
-
- if reasons and (
- ReasonFlags.unspecified in reasons
- or ReasonFlags.remove_from_crl in reasons
- ):
- raise ValueError(
- "unspecified and remove_from_crl are not valid reasons in a "
- "DistributionPoint"
- )
-
- self._full_name = full_name
- self._relative_name = relative_name
- self._reasons = reasons
- self._crl_issuer = crl_issuer
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, DistributionPoint):
- return NotImplemented
-
- return (
- self.full_name == other.full_name
- and self.relative_name == other.relative_name
- and self.reasons == other.reasons
- and self.crl_issuer == other.crl_issuer
- )
-
- def __hash__(self) -> int:
- if self.full_name is not None:
- fn: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
- self.full_name
- )
- else:
- fn = None
-
- if self.crl_issuer is not None:
- crl_issuer: typing.Optional[
- typing.Tuple[GeneralName, ...]
- ] = tuple(self.crl_issuer)
- else:
- crl_issuer = None
-
- return hash((fn, self.relative_name, self.reasons, crl_issuer))
-
- @property
- def full_name(self) -> typing.Optional[typing.List[GeneralName]]:
- return self._full_name
-
- @property
- def relative_name(self) -> typing.Optional[RelativeDistinguishedName]:
- return self._relative_name
-
- @property
- def reasons(self) -> typing.Optional[typing.FrozenSet[ReasonFlags]]:
- return self._reasons
-
- @property
- def crl_issuer(self) -> typing.Optional[typing.List[GeneralName]]:
- return self._crl_issuer
-
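# Editor's note: hedged usage sketch (not part of the deleted file) of the DistributionPoint
# class above, wrapped in a CRLDistributionPoints extension; the URL is purely illustrative.
from cryptography import x509

dp = x509.DistributionPoint(
    full_name=[x509.UniformResourceIdentifier("http://crl.example.com/ca.crl")],
    relative_name=None,
    reasons=frozenset([x509.ReasonFlags.key_compromise]),
    crl_issuer=None,
)
cdp = x509.CRLDistributionPoints([dp])
print(len(cdp), cdp[0].full_name)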
-
-class ReasonFlags(utils.Enum):
- unspecified = "unspecified"
- key_compromise = "keyCompromise"
- ca_compromise = "cACompromise"
- affiliation_changed = "affiliationChanged"
- superseded = "superseded"
- cessation_of_operation = "cessationOfOperation"
- certificate_hold = "certificateHold"
- privilege_withdrawn = "privilegeWithdrawn"
- aa_compromise = "aACompromise"
- remove_from_crl = "removeFromCRL"
-
-
-# These are distribution point bit string mappings. Not to be confused with
-# CRLReason reason flags bit string mappings.
-# ReasonFlags ::= BIT STRING {
-# unused (0),
-# keyCompromise (1),
-# cACompromise (2),
-# affiliationChanged (3),
-# superseded (4),
-# cessationOfOperation (5),
-# certificateHold (6),
-# privilegeWithdrawn (7),
-# aACompromise (8) }
-_REASON_BIT_MAPPING = {
- 1: ReasonFlags.key_compromise,
- 2: ReasonFlags.ca_compromise,
- 3: ReasonFlags.affiliation_changed,
- 4: ReasonFlags.superseded,
- 5: ReasonFlags.cessation_of_operation,
- 6: ReasonFlags.certificate_hold,
- 7: ReasonFlags.privilege_withdrawn,
- 8: ReasonFlags.aa_compromise,
-}
-
-_CRLREASONFLAGS = {
- ReasonFlags.key_compromise: 1,
- ReasonFlags.ca_compromise: 2,
- ReasonFlags.affiliation_changed: 3,
- ReasonFlags.superseded: 4,
- ReasonFlags.cessation_of_operation: 5,
- ReasonFlags.certificate_hold: 6,
- ReasonFlags.privilege_withdrawn: 7,
- ReasonFlags.aa_compromise: 8,
-}
-
-
-class PolicyConstraints(ExtensionType):
- oid = ExtensionOID.POLICY_CONSTRAINTS
-
- def __init__(
- self,
- require_explicit_policy: typing.Optional[int],
- inhibit_policy_mapping: typing.Optional[int],
- ) -> None:
- if require_explicit_policy is not None and not isinstance(
- require_explicit_policy, int
- ):
- raise TypeError(
- "require_explicit_policy must be a non-negative integer or "
- "None"
- )
-
- if inhibit_policy_mapping is not None and not isinstance(
- inhibit_policy_mapping, int
- ):
- raise TypeError(
- "inhibit_policy_mapping must be a non-negative integer or None"
- )
-
- if inhibit_policy_mapping is None and require_explicit_policy is None:
- raise ValueError(
- "At least one of require_explicit_policy and "
- "inhibit_policy_mapping must not be None"
- )
-
- self._require_explicit_policy = require_explicit_policy
- self._inhibit_policy_mapping = inhibit_policy_mapping
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, PolicyConstraints):
- return NotImplemented
-
- return (
- self.require_explicit_policy == other.require_explicit_policy
- and self.inhibit_policy_mapping == other.inhibit_policy_mapping
- )
-
- def __hash__(self) -> int:
- return hash(
- (self.require_explicit_policy, self.inhibit_policy_mapping)
- )
-
- @property
- def require_explicit_policy(self) -> typing.Optional[int]:
- return self._require_explicit_policy
-
- @property
- def inhibit_policy_mapping(self) -> typing.Optional[int]:
- return self._inhibit_policy_mapping
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class CertificatePolicies(ExtensionType):
- oid = ExtensionOID.CERTIFICATE_POLICIES
-
- def __init__(self, policies: typing.Iterable[PolicyInformation]) -> None:
- policies = list(policies)
- if not all(isinstance(x, PolicyInformation) for x in policies):
- raise TypeError(
- "Every item in the policies list must be a "
- "PolicyInformation"
- )
-
- self._policies = policies
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_policies")
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CertificatePolicies):
- return NotImplemented
-
- return self._policies == other._policies
-
- def __hash__(self) -> int:
- return hash(tuple(self._policies))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class PolicyInformation:
- def __init__(
- self,
- policy_identifier: ObjectIdentifier,
- policy_qualifiers: typing.Optional[
- typing.Iterable[typing.Union[str, UserNotice]]
- ],
- ) -> None:
- if not isinstance(policy_identifier, ObjectIdentifier):
- raise TypeError("policy_identifier must be an ObjectIdentifier")
-
- self._policy_identifier = policy_identifier
-
- if policy_qualifiers is not None:
- policy_qualifiers = list(policy_qualifiers)
- if not all(
- isinstance(x, (str, UserNotice)) for x in policy_qualifiers
- ):
- raise TypeError(
- "policy_qualifiers must be a list of strings and/or "
- "UserNotice objects or None"
- )
-
- self._policy_qualifiers = policy_qualifiers
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, PolicyInformation):
- return NotImplemented
-
- return (
- self.policy_identifier == other.policy_identifier
- and self.policy_qualifiers == other.policy_qualifiers
- )
-
- def __hash__(self) -> int:
- if self.policy_qualifiers is not None:
- pq: typing.Optional[
- typing.Tuple[typing.Union[str, UserNotice], ...]
- ] = tuple(self.policy_qualifiers)
- else:
- pq = None
-
- return hash((self.policy_identifier, pq))
-
- @property
- def policy_identifier(self) -> ObjectIdentifier:
- return self._policy_identifier
-
- @property
- def policy_qualifiers(
- self,
- ) -> typing.Optional[typing.List[typing.Union[str, UserNotice]]]:
- return self._policy_qualifiers
-
-
-class UserNotice:
- def __init__(
- self,
- notice_reference: typing.Optional[NoticeReference],
- explicit_text: typing.Optional[str],
- ) -> None:
- if notice_reference and not isinstance(
- notice_reference, NoticeReference
- ):
- raise TypeError(
- "notice_reference must be None or a NoticeReference"
- )
-
- self._notice_reference = notice_reference
- self._explicit_text = explicit_text
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, UserNotice):
- return NotImplemented
-
- return (
- self.notice_reference == other.notice_reference
- and self.explicit_text == other.explicit_text
- )
-
- def __hash__(self) -> int:
- return hash((self.notice_reference, self.explicit_text))
-
- @property
- def notice_reference(self) -> typing.Optional[NoticeReference]:
- return self._notice_reference
-
- @property
- def explicit_text(self) -> typing.Optional[str]:
- return self._explicit_text
-
-
-class NoticeReference:
- def __init__(
- self,
- organization: typing.Optional[str],
- notice_numbers: typing.Iterable[int],
- ) -> None:
- self._organization = organization
- notice_numbers = list(notice_numbers)
- if not all(isinstance(x, int) for x in notice_numbers):
- raise TypeError("notice_numbers must be a list of integers")
-
- self._notice_numbers = notice_numbers
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, NoticeReference):
- return NotImplemented
-
- return (
- self.organization == other.organization
- and self.notice_numbers == other.notice_numbers
- )
-
- def __hash__(self) -> int:
- return hash((self.organization, tuple(self.notice_numbers)))
-
- @property
- def organization(self) -> typing.Optional[str]:
- return self._organization
-
- @property
- def notice_numbers(self) -> typing.List[int]:
- return self._notice_numbers
-
-
-class ExtendedKeyUsage(ExtensionType):
- oid = ExtensionOID.EXTENDED_KEY_USAGE
-
- def __init__(self, usages: typing.Iterable[ObjectIdentifier]) -> None:
- usages = list(usages)
- if not all(isinstance(x, ObjectIdentifier) for x in usages):
- raise TypeError(
- "Every item in the usages list must be an ObjectIdentifier"
- )
-
- self._usages = usages
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_usages")
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, ExtendedKeyUsage):
- return NotImplemented
-
- return self._usages == other._usages
-
- def __hash__(self) -> int:
- return hash(tuple(self._usages))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class OCSPNoCheck(ExtensionType):
- oid = ExtensionOID.OCSP_NO_CHECK
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, OCSPNoCheck):
- return NotImplemented
-
- return True
-
- def __hash__(self) -> int:
- return hash(OCSPNoCheck)
-
- def __repr__(self) -> str:
- return ""
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class PrecertPoison(ExtensionType):
- oid = ExtensionOID.PRECERT_POISON
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, PrecertPoison):
- return NotImplemented
-
- return True
-
- def __hash__(self) -> int:
- return hash(PrecertPoison)
-
- def __repr__(self) -> str:
- return ""
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class TLSFeature(ExtensionType):
- oid = ExtensionOID.TLS_FEATURE
-
- def __init__(self, features: typing.Iterable[TLSFeatureType]) -> None:
- features = list(features)
- if (
- not all(isinstance(x, TLSFeatureType) for x in features)
- or len(features) == 0
- ):
- raise TypeError(
- "features must be a list of elements from the TLSFeatureType "
- "enum"
- )
-
- self._features = features
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_features")
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, TLSFeature):
- return NotImplemented
-
- return self._features == other._features
-
- def __hash__(self) -> int:
- return hash(tuple(self._features))
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class TLSFeatureType(utils.Enum):
- # status_request is defined in RFC 6066 and is used for what is commonly
- # called OCSP Must-Staple when present in the TLS Feature extension in an
- # X.509 certificate.
- status_request = 5
- # status_request_v2 is defined in RFC 6961 and allows multiple OCSP
- # responses to be provided. It is not currently in use by clients or
- # servers.
- status_request_v2 = 17
-
-
-_TLS_FEATURE_TYPE_TO_ENUM = {x.value: x for x in TLSFeatureType}
-
-
-class InhibitAnyPolicy(ExtensionType):
- oid = ExtensionOID.INHIBIT_ANY_POLICY
-
- def __init__(self, skip_certs: int) -> None:
- if not isinstance(skip_certs, int):
- raise TypeError("skip_certs must be an integer")
-
- if skip_certs < 0:
- raise ValueError("skip_certs must be a non-negative integer")
-
- self._skip_certs = skip_certs
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, InhibitAnyPolicy):
- return NotImplemented
-
- return self.skip_certs == other.skip_certs
-
- def __hash__(self) -> int:
- return hash(self.skip_certs)
-
- @property
- def skip_certs(self) -> int:
- return self._skip_certs
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class KeyUsage(ExtensionType):
- oid = ExtensionOID.KEY_USAGE
-
- def __init__(
- self,
- digital_signature: bool,
- content_commitment: bool,
- key_encipherment: bool,
- data_encipherment: bool,
- key_agreement: bool,
- key_cert_sign: bool,
- crl_sign: bool,
- encipher_only: bool,
- decipher_only: bool,
- ) -> None:
- if not key_agreement and (encipher_only or decipher_only):
- raise ValueError(
- "encipher_only and decipher_only can only be true when "
- "key_agreement is true"
- )
-
- self._digital_signature = digital_signature
- self._content_commitment = content_commitment
- self._key_encipherment = key_encipherment
- self._data_encipherment = data_encipherment
- self._key_agreement = key_agreement
- self._key_cert_sign = key_cert_sign
- self._crl_sign = crl_sign
- self._encipher_only = encipher_only
- self._decipher_only = decipher_only
-
- @property
- def digital_signature(self) -> bool:
- return self._digital_signature
-
- @property
- def content_commitment(self) -> bool:
- return self._content_commitment
-
- @property
- def key_encipherment(self) -> bool:
- return self._key_encipherment
-
- @property
- def data_encipherment(self) -> bool:
- return self._data_encipherment
-
- @property
- def key_agreement(self) -> bool:
- return self._key_agreement
-
- @property
- def key_cert_sign(self) -> bool:
- return self._key_cert_sign
-
- @property
- def crl_sign(self) -> bool:
- return self._crl_sign
-
- @property
- def encipher_only(self) -> bool:
- if not self.key_agreement:
- raise ValueError(
- "encipher_only is undefined unless key_agreement is true"
- )
- else:
- return self._encipher_only
-
- @property
- def decipher_only(self) -> bool:
- if not self.key_agreement:
- raise ValueError(
- "decipher_only is undefined unless key_agreement is true"
- )
- else:
- return self._decipher_only
-
- def __repr__(self) -> str:
- try:
- encipher_only = self.encipher_only
- decipher_only = self.decipher_only
- except ValueError:
- # Users found None confusing because even though encipher/decipher
- # have no meaning unless key_agreement is true, to construct an
- # instance of the class you still need to pass False.
- encipher_only = False
- decipher_only = False
-
- return (
- ""
- ).format(self, encipher_only, decipher_only)
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, KeyUsage):
- return NotImplemented
-
- return (
- self.digital_signature == other.digital_signature
- and self.content_commitment == other.content_commitment
- and self.key_encipherment == other.key_encipherment
- and self.data_encipherment == other.data_encipherment
- and self.key_agreement == other.key_agreement
- and self.key_cert_sign == other.key_cert_sign
- and self.crl_sign == other.crl_sign
- and self._encipher_only == other._encipher_only
- and self._decipher_only == other._decipher_only
- )
-
- def __hash__(self) -> int:
- return hash(
- (
- self.digital_signature,
- self.content_commitment,
- self.key_encipherment,
- self.data_encipherment,
- self.key_agreement,
- self.key_cert_sign,
- self.crl_sign,
- self._encipher_only,
- self._decipher_only,
- )
- )
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
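# Editor's note: hedged usage sketch (not part of the deleted file) of the KeyUsage
# extension above; encipher_only/decipher_only are only meaningful (and only readable)
# when key_agreement is True. Assumes the installed `cryptography` package.
from cryptography import x509

usage = x509.KeyUsage(
    digital_signature=True,
    content_commitment=False,
    key_encipherment=True,
    data_encipherment=False,
    key_agreement=False,
    key_cert_sign=False,
    crl_sign=False,
    encipher_only=False,
    decipher_only=False,
)
print(usage.digital_signature)  # True
# usage.encipher_only would raise ValueError here because key_agreement is False.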
-
-class NameConstraints(ExtensionType):
- oid = ExtensionOID.NAME_CONSTRAINTS
-
- def __init__(
- self,
- permitted_subtrees: typing.Optional[typing.Iterable[GeneralName]],
- excluded_subtrees: typing.Optional[typing.Iterable[GeneralName]],
- ) -> None:
- if permitted_subtrees is not None:
- permitted_subtrees = list(permitted_subtrees)
- if not permitted_subtrees:
- raise ValueError(
- "permitted_subtrees must be a non-empty list or None"
- )
- if not all(isinstance(x, GeneralName) for x in permitted_subtrees):
- raise TypeError(
- "permitted_subtrees must be a list of GeneralName objects "
- "or None"
- )
-
- self._validate_tree(permitted_subtrees)
-
- if excluded_subtrees is not None:
- excluded_subtrees = list(excluded_subtrees)
- if not excluded_subtrees:
- raise ValueError(
- "excluded_subtrees must be a non-empty list or None"
- )
- if not all(isinstance(x, GeneralName) for x in excluded_subtrees):
- raise TypeError(
- "excluded_subtrees must be a list of GeneralName objects "
- "or None"
- )
-
- self._validate_tree(excluded_subtrees)
-
- if permitted_subtrees is None and excluded_subtrees is None:
- raise ValueError(
- "At least one of permitted_subtrees and excluded_subtrees "
- "must not be None"
- )
-
- self._permitted_subtrees = permitted_subtrees
- self._excluded_subtrees = excluded_subtrees
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, NameConstraints):
- return NotImplemented
-
- return (
- self.excluded_subtrees == other.excluded_subtrees
- and self.permitted_subtrees == other.permitted_subtrees
- )
-
- def _validate_tree(self, tree: typing.Iterable[GeneralName]) -> None:
- self._validate_ip_name(tree)
- self._validate_dns_name(tree)
-
- def _validate_ip_name(self, tree: typing.Iterable[GeneralName]) -> None:
- if any(
- isinstance(name, IPAddress)
- and not isinstance(
- name.value, (ipaddress.IPv4Network, ipaddress.IPv6Network)
- )
- for name in tree
- ):
- raise TypeError(
- "IPAddress name constraints must be an IPv4Network or"
- " IPv6Network object"
- )
-
- def _validate_dns_name(self, tree: typing.Iterable[GeneralName]) -> None:
- if any(
- isinstance(name, DNSName) and "*" in name.value for name in tree
- ):
- raise ValueError(
- "DNSName name constraints must not contain the '*' wildcard"
- " character"
- )
-
- def __repr__(self) -> str:
- return (
- "".format(self)
- )
-
- def __hash__(self) -> int:
- if self.permitted_subtrees is not None:
- ps: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
- self.permitted_subtrees
- )
- else:
- ps = None
-
- if self.excluded_subtrees is not None:
- es: typing.Optional[typing.Tuple[GeneralName, ...]] = tuple(
- self.excluded_subtrees
- )
- else:
- es = None
-
- return hash((ps, es))
-
- @property
- def permitted_subtrees(
- self,
- ) -> typing.Optional[typing.List[GeneralName]]:
- return self._permitted_subtrees
-
- @property
- def excluded_subtrees(
- self,
- ) -> typing.Optional[typing.List[GeneralName]]:
- return self._excluded_subtrees
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
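# Editor's note: hedged usage sketch (not part of the deleted file) of the NameConstraints
# extension above; IPAddress constraints must be networks, and DNS constraints must not
# contain the '*' wildcard. Assumes the installed `cryptography` package.
import ipaddress
from cryptography import x509

nc = x509.NameConstraints(
    permitted_subtrees=[
        x509.DNSName("example.com"),
        x509.IPAddress(ipaddress.IPv4Network("192.0.2.0/24")),
    ],
    excluded_subtrees=None,
)
print(nc.permitted_subtrees)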
-
-class Extension(typing.Generic[ExtensionTypeVar]):
- def __init__(
- self, oid: ObjectIdentifier, critical: bool, value: ExtensionTypeVar
- ) -> None:
- if not isinstance(oid, ObjectIdentifier):
- raise TypeError(
- "oid argument must be an ObjectIdentifier instance."
- )
-
- if not isinstance(critical, bool):
- raise TypeError("critical must be a boolean value")
-
- self._oid = oid
- self._critical = critical
- self._value = value
-
- @property
- def oid(self) -> ObjectIdentifier:
- return self._oid
-
- @property
- def critical(self) -> bool:
- return self._critical
-
- @property
- def value(self) -> ExtensionTypeVar:
- return self._value
-
- def __repr__(self) -> str:
- return (
- ""
- ).format(self)
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, Extension):
- return NotImplemented
-
- return (
- self.oid == other.oid
- and self.critical == other.critical
- and self.value == other.value
- )
-
- def __hash__(self) -> int:
- return hash((self.oid, self.critical, self.value))
-
-
-class GeneralNames:
- def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
- general_names = list(general_names)
- if not all(isinstance(x, GeneralName) for x in general_names):
- raise TypeError(
- "Every item in the general_names list must be an "
- "object conforming to the GeneralName interface"
- )
-
- self._general_names = general_names
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[UniformResourceIdentifier],
- typing.Type[RFC822Name],
- ],
- ) -> typing.List[str]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[DirectoryName],
- ) -> typing.List[Name]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[RegisteredID],
- ) -> typing.List[ObjectIdentifier]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[IPAddress]
- ) -> typing.List[_IPAddressTypes]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[OtherName]
- ) -> typing.List[OtherName]:
- ...
-
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[DirectoryName],
- typing.Type[IPAddress],
- typing.Type[OtherName],
- typing.Type[RFC822Name],
- typing.Type[RegisteredID],
- typing.Type[UniformResourceIdentifier],
- ],
- ) -> typing.Union[
- typing.List[_IPAddressTypes],
- typing.List[str],
- typing.List[OtherName],
- typing.List[Name],
- typing.List[ObjectIdentifier],
- ]:
-        # Return the value of each GeneralName, except for OtherName instances,
-        # which we return directly because they carry two significant fields
-        # rather than a single value.
- objs = (i for i in self if isinstance(i, type))
- if type != OtherName:
- return [i.value for i in objs]
- return list(objs)
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, GeneralNames):
- return NotImplemented
-
- return self._general_names == other._general_names
-
- def __hash__(self) -> int:
- return hash(tuple(self._general_names))
-
-
-class SubjectAlternativeName(ExtensionType):
- oid = ExtensionOID.SUBJECT_ALTERNATIVE_NAME
-
- def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
- self._general_names = GeneralNames(general_names)
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[UniformResourceIdentifier],
- typing.Type[RFC822Name],
- ],
- ) -> typing.List[str]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[DirectoryName],
- ) -> typing.List[Name]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[RegisteredID],
- ) -> typing.List[ObjectIdentifier]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[IPAddress]
- ) -> typing.List[_IPAddressTypes]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[OtherName]
- ) -> typing.List[OtherName]:
- ...
-
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[DirectoryName],
- typing.Type[IPAddress],
- typing.Type[OtherName],
- typing.Type[RFC822Name],
- typing.Type[RegisteredID],
- typing.Type[UniformResourceIdentifier],
- ],
- ) -> typing.Union[
- typing.List[_IPAddressTypes],
- typing.List[str],
- typing.List[OtherName],
- typing.List[Name],
- typing.List[ObjectIdentifier],
- ]:
- return self._general_names.get_values_for_type(type)
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, SubjectAlternativeName):
- return NotImplemented
-
- return self._general_names == other._general_names
-
- def __hash__(self) -> int:
- return hash(self._general_names)
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
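# Editor's note: hedged end-to-end sketch (not part of the deleted file) showing how the
# SubjectAlternativeName extension above is attached to a certificate and read back with
# Extensions.get_extension_for_class(); names and validity dates are purely illustrative.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime(2023, 1, 1))
    .not_valid_after(datetime.datetime(2024, 1, 1))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False
    )
    .sign(key, hashes.SHA256())
)
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
print(san.value.get_values_for_type(x509.DNSName))  # ['example.com']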
-
-class IssuerAlternativeName(ExtensionType):
- oid = ExtensionOID.ISSUER_ALTERNATIVE_NAME
-
- def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
- self._general_names = GeneralNames(general_names)
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[UniformResourceIdentifier],
- typing.Type[RFC822Name],
- ],
- ) -> typing.List[str]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[DirectoryName],
- ) -> typing.List[Name]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[RegisteredID],
- ) -> typing.List[ObjectIdentifier]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[IPAddress]
- ) -> typing.List[_IPAddressTypes]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[OtherName]
- ) -> typing.List[OtherName]:
- ...
-
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[DirectoryName],
- typing.Type[IPAddress],
- typing.Type[OtherName],
- typing.Type[RFC822Name],
- typing.Type[RegisteredID],
- typing.Type[UniformResourceIdentifier],
- ],
- ) -> typing.Union[
- typing.List[_IPAddressTypes],
- typing.List[str],
- typing.List[OtherName],
- typing.List[Name],
- typing.List[ObjectIdentifier],
- ]:
- return self._general_names.get_values_for_type(type)
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, IssuerAlternativeName):
- return NotImplemented
-
- return self._general_names == other._general_names
-
- def __hash__(self) -> int:
- return hash(self._general_names)
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class CertificateIssuer(ExtensionType):
- oid = CRLEntryExtensionOID.CERTIFICATE_ISSUER
-
- def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
- self._general_names = GeneralNames(general_names)
-
- __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[UniformResourceIdentifier],
- typing.Type[RFC822Name],
- ],
- ) -> typing.List[str]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[DirectoryName],
- ) -> typing.List[Name]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self,
- type: typing.Type[RegisteredID],
- ) -> typing.List[ObjectIdentifier]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[IPAddress]
- ) -> typing.List[_IPAddressTypes]:
- ...
-
- @typing.overload
- def get_values_for_type(
- self, type: typing.Type[OtherName]
- ) -> typing.List[OtherName]:
- ...
-
- def get_values_for_type(
- self,
- type: typing.Union[
- typing.Type[DNSName],
- typing.Type[DirectoryName],
- typing.Type[IPAddress],
- typing.Type[OtherName],
- typing.Type[RFC822Name],
- typing.Type[RegisteredID],
- typing.Type[UniformResourceIdentifier],
- ],
- ) -> typing.Union[
- typing.List[_IPAddressTypes],
- typing.List[str],
- typing.List[OtherName],
- typing.List[Name],
- typing.List[ObjectIdentifier],
- ]:
- return self._general_names.get_values_for_type(type)
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CertificateIssuer):
- return NotImplemented
-
- return self._general_names == other._general_names
-
- def __hash__(self) -> int:
- return hash(self._general_names)
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class CRLReason(ExtensionType):
- oid = CRLEntryExtensionOID.CRL_REASON
-
- def __init__(self, reason: ReasonFlags) -> None:
- if not isinstance(reason, ReasonFlags):
- raise TypeError("reason must be an element from ReasonFlags")
-
- self._reason = reason
-
- def __repr__(self) -> str:
- return f""
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, CRLReason):
- return NotImplemented
-
- return self.reason == other.reason
-
- def __hash__(self) -> int:
- return hash(self.reason)
-
- @property
- def reason(self) -> ReasonFlags:
- return self._reason
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
-class InvalidityDate(ExtensionType):
- oid = CRLEntryExtensionOID.INVALIDITY_DATE
-
- def __init__(self, invalidity_date: datetime.datetime) -> None:
- if not isinstance(invalidity_date, datetime.datetime):
- raise TypeError("invalidity_date must be a datetime.datetime")
-
- self._invalidity_date = invalidity_date
-
- def __repr__(self) -> str:
- return "".format(
- self._invalidity_date
- )
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, InvalidityDate):
- return NotImplemented
-
- return self.invalidity_date == other.invalidity_date
-
- def __hash__(self) -> int:
- return hash(self.invalidity_date)
-
- @property
- def invalidity_date(self) -> datetime.datetime:
- return self._invalidity_date
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
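-# Certificate extension (RFC 6962): Signed Certificate Timestamps issued by
-# Certificate Transparency logs and embedded in the final certificate.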
-class PrecertificateSignedCertificateTimestamps(ExtensionType):
- oid = ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
-
- def __init__(
- self,
- signed_certificate_timestamps: typing.Iterable[
- SignedCertificateTimestamp
- ],
- ) -> None:
- signed_certificate_timestamps = list(signed_certificate_timestamps)
- if not all(
- isinstance(sct, SignedCertificateTimestamp)
- for sct in signed_certificate_timestamps
- ):
- raise TypeError(
- "Every item in the signed_certificate_timestamps list must be "
- "a SignedCertificateTimestamp"
- )
- self._signed_certificate_timestamps = signed_certificate_timestamps
-
- __len__, __iter__, __getitem__ = _make_sequence_methods(
- "_signed_certificate_timestamps"
- )
-
- def __repr__(self) -> str:
- return "".format(
- list(self)
- )
-
- def __hash__(self) -> int:
- return hash(tuple(self._signed_certificate_timestamps))
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, PrecertificateSignedCertificateTimestamps):
- return NotImplemented
-
- return (
- self._signed_certificate_timestamps
- == other._signed_certificate_timestamps
- )
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
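-# OCSP response extension (RFC 6962): Signed Certificate Timestamps delivered
-# via OCSP rather than embedded in the certificate itself.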
-class SignedCertificateTimestamps(ExtensionType):
- oid = ExtensionOID.SIGNED_CERTIFICATE_TIMESTAMPS
-
- def __init__(
- self,
- signed_certificate_timestamps: typing.Iterable[
- SignedCertificateTimestamp
- ],
- ) -> None:
- signed_certificate_timestamps = list(signed_certificate_timestamps)
- if not all(
- isinstance(sct, SignedCertificateTimestamp)
- for sct in signed_certificate_timestamps
- ):
- raise TypeError(
- "Every item in the signed_certificate_timestamps list must be "
- "a SignedCertificateTimestamp"
- )
- self._signed_certificate_timestamps = signed_certificate_timestamps
-
- __len__, __iter__, __getitem__ = _make_sequence_methods(
- "_signed_certificate_timestamps"
- )
-
- def __repr__(self) -> str:
- return f""
-
- def __hash__(self) -> int:
- return hash(tuple(self._signed_certificate_timestamps))
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, SignedCertificateTimestamps):
- return NotImplemented
-
- return (
- self._signed_certificate_timestamps
- == other._signed_certificate_timestamps
- )
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
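-# OCSP extension (RFC 6960): a nonce that binds an OCSP request to its response,
-# guarding against replay of old responses.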
-class OCSPNonce(ExtensionType):
- oid = OCSPExtensionOID.NONCE
-
- def __init__(self, nonce: bytes) -> None:
- if not isinstance(nonce, bytes):
- raise TypeError("nonce must be bytes")
-
- self._nonce = nonce
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, OCSPNonce):
- return NotImplemented
-
- return self.nonce == other.nonce
-
- def __hash__(self) -> int:
- return hash(self.nonce)
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def nonce(self) -> bytes:
- return self._nonce
-
- def public_bytes(self) -> bytes:
- return rust_x509.encode_extension_value(self)
-
-
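-# OCSP extension (RFC 6960): lists the response types the requester is willing
-# to accept (typically the basic OCSP response type).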
-class OCSPAcceptableResponses(ExtensionType):
- oid = OCSPExtensionOID.ACCEPTABLE_RESPONSES
-
- def __init__(self, responses: typing.Iterable[ObjectIdentifier]) -> None:
- responses = list(responses)
- if any(not isinstance(r, ObjectIdentifier) for r in responses):
- raise TypeError("All responses must be ObjectIdentifiers")
-
- self._responses = responses
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, OCSPAcceptableResponses):
- return NotImplemented
-
- return self._responses == other._responses
-
- def __hash__(self) -> int:
- return hash(tuple(self._responses))
-
- def __repr__(self) -> str:
- return f"